国家预印本平台 (National Preprint Platform)

Uncovering Cross-Linguistic Disparities in LLMs using Sparse Autoencoders


Source: arXiv
English Abstract

Multilingual large language models (LLMs) exhibit strong cross-linguistic generalization, yet medium- to low-resource languages underperform on common benchmarks such as ARC-Challenge, MMLU, and HellaSwag. We analyze activation patterns in Gemma-2-2B across all 26 residual layers and 10 languages: Chinese (zh), Russian (ru), Spanish (es), and Italian (it); the medium- to low-resource languages Indonesian (id), Catalan (ca), Marathi (mr), Malayalam (ml), and Hindi (hi); and English (en) as the reference. Using Sparse Autoencoders (SAEs), we reveal systematic disparities in activation patterns: medium- to low-resource languages receive up to 26.27 percent lower activations in early layers, with a persistent gap of 19.89 percent in deeper layers. To address this, we apply activation-aware fine-tuning via Low-Rank Adaptation (LoRA), yielding substantial activation gains, such as 87.69 percent for Malayalam and 86.32 percent for Hindi, while maintaining English retention at approximately 91 percent. After fine-tuning, benchmark results show modest but consistent improvements, highlighting activation alignment as a key factor in enhancing multilingual LLM performance.
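To make the activation-gap comparison concrete, the sketch below computes the percent by which a language's mean SAE feature activation falls below the English reference. This is a minimal illustration, not the paper's code: the function name and the toy activation vectors are hypothetical, and the real analysis would use activations extracted from Gemma-2-2B residual layers.

```python
import numpy as np

def activation_gap(lang_acts: np.ndarray, en_acts: np.ndarray) -> float:
    """Percent by which mean SAE activation for a language falls
    below the English reference (positive = lower than English)."""
    lang_mean = float(np.mean(lang_acts))
    en_mean = float(np.mean(en_acts))
    return 100.0 * (en_mean - lang_mean) / en_mean

# Toy per-feature mean activations (hypothetical, for illustration only)
en = np.array([1.0, 0.8, 1.2])   # English reference
ml = np.array([0.7, 0.6, 0.9])   # a lower-resource language

gap = activation_gap(ml, en)     # positive: activations below English
```

In the paper's setting, such a per-layer gap is what is reported as up to 26.27 percent in early layers; here the numbers are invented purely to show the computation.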

Richmond Sin Jing Xuan, Jalil Huseynov, Yang Zhang

Language families: Indo-European; Austroasiatic; Dravidian

Richmond Sin Jing Xuan, Jalil Huseynov, Yang Zhang. Uncovering Cross-Linguistic Disparities in LLMs using Sparse Autoencoders [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.18918.
