TASE: Token Awareness and Structured Evaluation for Multilingual Language Models
While large language models (LLMs) have demonstrated remarkable performance on high-level semantic tasks, they often struggle with fine-grained, token-level understanding and structural reasoning, capabilities that are essential for applications requiring precision and control. We introduce TASE, a comprehensive benchmark designed to evaluate LLMs' ability to perceive and reason about token-level information across languages. TASE covers 10 tasks under two core categories: token awareness and structural understanding, spanning Chinese, English, and Korean, with a 35,927-instance evaluation set and a scalable synthetic data generation pipeline for training. Tasks include character counting, token alignment, syntactic structure parsing, and length constraint satisfaction. We evaluate over 30 leading commercial and open-source LLMs, including O3, Claude 4, Gemini 2.5 Pro, and DeepSeek-R1, and train a custom Qwen2.5-14B model using the GRPO training method. Results show that human performance significantly outpaces current LLMs, revealing persistent weaknesses in token-level reasoning. TASE sheds light on these limitations and provides a new diagnostic lens for future improvements in low-level language understanding and cross-lingual generalization. Our code and dataset are publicly available at https://github.com/cyzcz/Tase.
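Tasks like character counting lend themselves to programmatic generation, which is what makes the training pipeline scalable. Below is a minimal sketch of how a synthetic character-counting instance might be generated and verified; the function name, instance format, and English-only vocabulary are illustrative assumptions, not the actual TASE pipeline (see https://github.com/cyzcz/Tase), which spans Chinese, English, and Korean.

```python
import random
import string

# Hypothetical generator for a character-counting task instance.
# The real TASE pipeline may use different prompts, formats, and scripts.
def make_char_count_instance(vocab: str = string.ascii_lowercase,
                             min_len: int = 10, max_len: int = 40,
                             rng: random.Random | None = None) -> dict:
    rng = rng or random.Random()
    # Sample a random string of characters from the vocabulary.
    length = rng.randint(min_len, max_len)
    text = "".join(rng.choice(vocab) for _ in range(length))
    # Pick a target character that is guaranteed to appear in the text.
    target = rng.choice(text)
    return {
        "prompt": f'How many times does the character "{target}" '
                  f'appear in the string "{text}"?',
        # Ground-truth answer computed deterministically, enabling
        # automatic scoring of model outputs.
        "answer": text.count(target),
    }

if __name__ == "__main__":
    inst = make_char_count_instance(rng=random.Random(0))
    print(inst["prompt"])
    print("Expected answer:", inst["answer"])
```

Because the ground-truth answer is computed by the generator itself, instances like this can be produced at arbitrary scale with exact automatic scoring, which is the property a reward signal such as GRPO training relies on.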
Chenzhuo Zhao, Xinda Wang, Yue Huang, Junting Lu, Ziqian Liu
Linguistics; Commonly Used Foreign Languages; Sino-Tibetan Languages; Altaic Languages (Turkic-Mongolic-Tungusic); Indo-European Languages; Computing and Computer Technology
Chenzhuo Zhao, Xinda Wang, Yue Huang, Junting Lu, Ziqian Liu. TASE: Token Awareness and Structured Evaluation for Multilingual Language Models [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2508.05468.