国家预印本平台 (National Preprint Platform)

LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama


Source: arXiv
English Abstract

As large language models (LLMs) rapidly advance, evaluating their performance is critical. LLMs are trained on multilingual data, but their reasoning abilities are mainly evaluated using English datasets. Hence, robust evaluation frameworks built on high-quality non-English datasets are needed, especially for low-resource languages (LRLs). This study evaluates eight state-of-the-art (SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language Understanding (MMLU) subset curated with native speakers for linguistic and cultural relevance. Giriama is benchmarked for the first time. Our evaluation shows that OpenAI's o1 model outperforms others across all languages, scoring 92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks. Mistral-large (35.6%) and Llama-70B IT (41%) perform weakly on both Latvian and Giriama. Our results underscore the need for localized benchmarks and human evaluations in advancing cultural AI contextualization.
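The 0-shot scores above are exact-match accuracies on multiple-choice items. As a minimal sketch (with made-up answers, not the paper's data or prompts), MMLU-style scoring reduces to comparing a model's predicted option letter against the gold letter for each item:

```python
def accuracy(predictions, gold):
    """Fraction of items where the predicted option letter matches the gold letter."""
    if not gold:
        raise ValueError("empty gold answers")
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold answers differ in length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy illustration: 3 of 5 hypothetical predictions match the gold letters.
preds = ["A", "C", "B", "D", "A"]
golds = ["A", "C", "D", "D", "B"]
print(f"accuracy = {accuracy(preds, golds):.1%}")
```

In practice each benchmark item is a question plus four options (A-D), and the reported per-language percentages are this ratio over the full curated subset.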

Randu Karisa、Naome A. Etori、Kevin Lu、Arturs Kanepajs

Subjects: Uralic languages (Finno-Ugric); African languages; Linguistics; Information and knowledge dissemination; Computing and computer technology

Randu Karisa, Naome A. Etori, Kevin Lu, Arturs Kanepajs. LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama [EB/OL]. (2025-03-14) [2025-05-18]. https://arxiv.org/abs/2503.11911.
