SAS-Bench: A Fine-Grained Benchmark for Evaluating Short Answer Scoring with Large Language Models
Subjective Answer Grading (SAG) plays a crucial role in education, standardized testing, and automated assessment systems, particularly for evaluating short-form responses in Short Answer Scoring (SAS). However, existing approaches often produce coarse-grained scores and lack detailed reasoning. Although large language models (LLMs) have demonstrated potential as zero-shot evaluators, they remain susceptible to bias, inconsistencies with human judgment, and limited transparency in scoring decisions. To overcome these limitations, we introduce SAS-Bench, a benchmark specifically designed for LLM-based SAS tasks. SAS-Bench provides fine-grained, step-wise scoring, expert-annotated error categories, and a diverse range of question types derived from real-world subject-specific exams. This benchmark facilitates detailed evaluation of model reasoning processes and explainability. We also release an open-source dataset containing 1,030 questions and 4,109 student responses, each annotated by domain experts. Furthermore, we conduct comprehensive experiments with various LLMs, identifying major challenges in scoring science-related questions and highlighting the effectiveness of few-shot prompting in improving scoring accuracy. Our work offers valuable insights into the development of more robust, fair, and educationally meaningful LLM-based evaluation systems.
Peichao Lai, Kexuan Zhang, Yi Lin, Linyihan Zhang, Feiyang Ye, Jinhao Yan, Yanwei Xu, Conghui He, Yilei Wang, Wentao Zhang, Bin Cui
Educational Computing Technology; Computer Technology
Peichao Lai, Kexuan Zhang, Yi Lin, Linyihan Zhang, Feiyang Ye, Jinhao Yan, Yanwei Xu, Conghui He, Yilei Wang, Wentao Zhang, Bin Cui. SAS-Bench: A Fine-Grained Benchmark for Evaluating Short Answer Scoring with Large Language Models [EB/OL]. (2025-05-12) [2025-06-18]. https://arxiv.org/abs/2505.07247.