A Third Paradigm for LLM Evaluation: Dialogue Game-Based Evaluation using clembench
There are currently two main paradigms for evaluating large language models (LLMs): reference-based evaluation and preference-based evaluation. The first, carried over from the evaluation of machine learning models in general, relies on pre-defined task instances for which reference task executions are available. The second, best exemplified by the LM-arena, relies on (often self-selected) users bringing their own intents to a site that routes these to several models in parallel, among whose responses the user then selects their most preferred one. The former paradigm hence excels at control over what is tested, while the latter comes with higher ecological validity, testing actual use cases interactively. Recently, a third complementary paradigm has emerged that combines some of the strengths of these approaches, offering control over multi-turn, reference-free, repeatable interactions, while stressing goal-directedness: dialogue game-based evaluation. While the utility of this approach has been shown by several projects, its adoption has been held back by the lack of a mature, easily re-usable implementation. In this paper, we present clembench, which has been in continuous development since 2023 and has in its latest release been optimized for ease of general use. We describe how it can be used to benchmark one's own models (using a provided set of benchmark game instances in English), as well as how easily the benchmark itself can be extended with new, tailor-made targeted tests.
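To make the paradigm concrete, the following is a minimal sketch of a dialogue game-based evaluation loop: a programmatic Game Master feeds prompts to player models turn by turn, checks that responses follow the game rules, and scores the episode without gold references. This is not the clembench API; all names (the toy Taboo-style game, `play_episode`, the mock players, the aggregated "% played" and "quality" metrics) are hypothetical illustrations of the idea, under the assumption that a player model can be wrapped as a simple prompt-to-response callable.

```python
"""Minimal sketch of dialogue game-based evaluation (not the clembench API)."""

from dataclasses import dataclass
from typing import Callable

# A "player" is anything that maps a prompt string to a response string,
# e.g. a thin wrapper around an LLM API call (names here are hypothetical).
Player = Callable[[str], str]


@dataclass
class EpisodeResult:
    played: bool   # did the episode finish without rule violations?
    success: bool  # was the game goal reached?
    turns: int     # number of turns taken


def play_episode(target: str, describer: Player, guesser: Player,
                 max_turns: int = 3) -> EpisodeResult:
    """Game Master loop for a toy Taboo-style word-guessing game."""
    for turn in range(1, max_turns + 1):
        clue = describer(
            f"Describe the word '{target}' in one sentence "
            f"without using the word itself."
        )
        # Rule check: the describer must not mention the target word.
        if target.lower() in clue.lower():
            return EpisodeResult(played=False, success=False, turns=turn)

        guess = guesser(
            f"A clue for a hidden word: {clue}\nAnswer with a single word."
        ).strip().strip(".").lower()

        if guess == target.lower():
            return EpisodeResult(played=True, success=True, turns=turn)

    return EpisodeResult(played=True, success=False, turns=max_turns)


def benchmark(instances: list[str], describer: Player, guesser: Player) -> dict:
    """Aggregate episodes into paradigm-typical metrics: the share of
    episodes played by the rules, and success among the played ones."""
    results = [play_episode(t, describer, guesser) for t in instances]
    played = [r for r in results if r.played]
    pct_played = len(played) / len(results)
    quality = sum(r.success for r in played) / len(played) if played else 0.0
    return {"% played": pct_played, "quality": quality}


if __name__ == "__main__":
    # Stand-in "models" so the sketch runs without any API access.
    def mock_describer(prompt: str) -> str:
        return "A juicy red fruit that keeps the doctor away."

    def mock_guesser(prompt: str) -> str:
        return "apple"

    print(benchmark(["apple", "banana"], mock_describer, mock_guesser))
```

The design point the sketch tries to show is that control comes from the instance set and the rule checks, not from gold outputs: the Game Master can abort rule-violating episodes and score the rest, which is what makes the interaction multi-turn, reference-free, and repeatable.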
David Schlangen, Sherzod Hakimov, Jonathan Jordan, Philipp Sadler
Computing Technology, Computer Technology
David Schlangen, Sherzod Hakimov, Jonathan Jordan, Philipp Sadler. A Third Paradigm for LLM Evaluation: Dialogue Game-Based Evaluation using clembench [EB/OL]. (2025-07-11) [2025-07-25]. https://arxiv.org/abs/2507.08491