
Benchmarking Linguistic Diversity of Large Language Models

Source: arXiv
Abstract

The development and evaluation of Large Language Models (LLMs) have primarily focused on their task-solving capabilities, with recent models even surpassing human performance in some areas. However, this focus often neglects whether machine-generated language matches the human level of diversity, in terms of vocabulary choice, syntactic construction, and expression of meaning, raising questions about whether the fundamentals of language generation have been fully addressed. This paper emphasizes the importance of examining the preservation of human linguistic richness by language models, given the concerning surge in online content produced or aided by LLMs. We propose a comprehensive framework for evaluating LLMs from various linguistic diversity perspectives including lexical, syntactic, and semantic dimensions. Using this framework, we benchmark several state-of-the-art LLMs across all diversity dimensions, and conduct an in-depth case study for syntactic diversity. Finally, we analyze how different development and deployment choices impact the linguistic diversity of LLM outputs.
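As a purely illustrative aside, the sketch below shows two common corpus-level proxies for the lexical dimension of diversity, type-token ratio and distinct-n. These are generic metrics from the literature and are not necessarily the measures adopted in the paper's framework.

# Minimal sketch of lexical-diversity proxies (illustrative only; not the
# paper's actual evaluation framework).

def type_token_ratio(tokens: list[str]) -> float:
    """Fraction of unique tokens among all tokens (higher = more diverse)."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def distinct_n(tokens: list[str], n: int = 2) -> float:
    """Fraction of unique n-grams among all n-grams in the text."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

if __name__ == "__main__":
    # Toy example: a repetitive string scores low on both proxies.
    text = "the quick brown fox jumps over the lazy dog the quick brown fox"
    tokens = text.split()
    print(f"TTR:        {type_token_ratio(tokens):.3f}")
    print(f"distinct-2: {distinct_n(tokens, 2):.3f}")

In practice, such lexical scores would be computed over large samples of model outputs and compared against a human-written reference corpus; syntactic and semantic diversity require separate measures.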

Yanzhu Guo, Guokan Shang, Chloé Clavel

Linguistics

Yanzhu Guo, Guokan Shang, Chloé Clavel. Benchmarking Linguistic Diversity of Large Language Models [EB/OL]. (2025-07-25) [2025-08-16]. https://arxiv.org/abs/2412.10271.
