Probe by Gaming: A Game-based Benchmark for Assessing Conceptual Knowledge in LLMs
Concepts represent generalized abstractions that enable humans to categorize and reason efficiently, yet it is unclear to what extent Large Language Models (LLMs) comprehend these semantic relationships. Existing benchmarks typically focus on factual recall and isolated tasks, failing to evaluate the ability of LLMs to understand conceptual boundaries. To address this gap, we introduce CK-Arena, a multi-agent interaction game built upon the Undercover game, designed to evaluate the capacity of LLMs to reason with concepts in interactive settings. CK-Arena challenges models to describe, differentiate, and infer conceptual boundaries based on partial information, encouraging models to explore commonalities and distinctions between closely related concepts. By simulating real-world interaction, CK-Arena provides a scalable and realistic benchmark for assessing conceptual reasoning in dynamic environments. Experimental results show that LLMs' understanding of conceptual knowledge varies significantly across different categories and is not strictly aligned with parameter size or general model capabilities. The data and code are available at the project homepage: https://ck-arena.site.
Shuhang Xu, Weijian Deng, Yixuan Zhou, Fangwei Zhong
Linguistics
Shuhang Xu, Weijian Deng, Yixuan Zhou, Fangwei Zhong. Probe by Gaming: A Game-based Benchmark for Assessing Conceptual Knowledge in LLMs [EB/OL]. (2025-05-23) [2025-06-07]. https://arxiv.org/abs/2505.17512.