
Exploring LLM Autoscoring Reliability in Large-Scale Writing Assessments Using Generalizability Theory


Source: arXiv
Abstract

This study investigates the estimation of reliability for large language models (LLMs) in scoring writing tasks from the AP Chinese Language and Culture Exam. Using generalizability theory, the research evaluates and compares score consistency between human and AI raters across two types of AP Chinese free-response writing tasks: story narration and email response. These essays were independently scored by two trained human raters and seven AI raters. Each essay received four scores: one holistic score and three analytic scores corresponding to the domains of task completion, delivery, and language use. Results indicate that although human raters produced more reliable scores overall, LLMs demonstrated reasonable consistency under certain conditions, particularly for story narration tasks. Composite scoring that incorporated both human and AI raters improved reliability, suggesting that hybrid scoring models may offer benefits for large-scale writing assessments.
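For context, and as an illustration that is not taken from the paper (the study's actual design likely includes additional facets such as task type and scoring domain), generalizability theory summarizes reliability for a simple fully crossed persons-by-raters (p × r) design with the generalizability coefficient for relative decisions and the dependability coefficient for absolute decisions, both estimated from ANOVA variance components:

$$E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n'_r}, \qquad \Phi = \frac{\sigma^2_p}{\sigma^2_p + \left(\sigma^2_r + \sigma^2_{pr,e}\right)/n'_r}$$

Here $\sigma^2_p$ is examinee (person) variance, $\sigma^2_r$ is rater variance, $\sigma^2_{pr,e}$ is the person-by-rater interaction confounded with residual error, and $n'_r$ is the number of raters assumed in the decision (D) study. Averaging over more raters (a larger $n'_r$) shrinks the error terms, which is one reason composite scoring can raise reliability; combining human and AI raters in practice would call for a more elaborate multi-facet or multivariate design than this sketch.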

Dan Song, Won-Chan Lee, Hong Jiao

Subjects: Linguistics; Chinese Language; Computing and Computer Technology

Dan Song, Won-Chan Lee, Hong Jiao. Exploring LLM Autoscoring Reliability in Large-Scale Writing Assessments Using Generalizability Theory [EB/OL]. (2025-07-29) [2025-08-10]. https://arxiv.org/abs/2507.19980.
