Evaluating the Evaluators: Are readability metrics good measures of readability?
Plain Language Summarization (PLS) aims to distill complex documents into accessible summaries for non-expert audiences. In this paper, we conduct a thorough survey of PLS literature, and identify that the current standard practice for readability evaluation is to use traditional readability metrics, such as Flesch-Kincaid Grade Level (FKGL). However, despite proven utility in other fields, these metrics have not been compared to human readability judgments in PLS. We evaluate 8 readability metrics and show that most correlate poorly with human judgments, including the most popular metric, FKGL. We then show that Language Models (LMs) are better judges of readability, with the best-performing model achieving a Pearson correlation of 0.56 with human judgments. Extending our analysis to PLS datasets, which contain summaries aimed at non-expert audiences, we find that LMs better capture deeper measures of readability, such as required background knowledge, and lead to different conclusions than the traditional metrics. Based on these findings, we offer recommendations for best practices in the evaluation of plain language summaries. We release our analysis code and survey data.
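The correlation analysis the abstract describes can be illustrated with standard tooling. Below is a minimal sketch, assuming the `textstat` and `scipy` packages and hypothetical example summaries with made-up human ratings; it is not the authors' released analysis code.

```python
# Sketch: score summaries with FKGL and correlate the scores with human judgments.
# The summaries and human ratings below are hypothetical illustrations.
import textstat
from scipy.stats import pearsonr

summaries = [
    "The heart pumps blood through the body.",
    "Exercise helps your heart stay healthy.",
    "Myocardial contractility modulates systemic perfusion dynamics.",
    "Pharmacokinetic variability complicates anticoagulant dosing regimens.",
]
human_ratings = [4.6, 4.3, 1.8, 2.1]  # hypothetical readability judgments (higher = easier)

# Flesch-Kincaid Grade Level: a lower grade level implies easier text.
fkgl_scores = [textstat.flesch_kincaid_grade(text) for text in summaries]

# Pearson correlation between the automatic metric and human judgments.
r, p_value = pearsonr(fkgl_scores, human_ratings)
print(f"FKGL scores: {fkgl_scores}")
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```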
Isabel Cachola, Daniel Khashabi, Mark Dredze
Linguistics
Isabel Cachola, Daniel Khashabi, Mark Dredze. Evaluating the Evaluators: Are readability metrics good measures of readability? [EB/OL]. (2025-08-26) [2025-09-05]. https://arxiv.org/abs/2508.19221.