Language-specific Neurons Do Not Facilitate Cross-Lingual Transfer
Multilingual large language models (LLMs) aim for robust natural language understanding across diverse languages, yet their performance degrades significantly on low-resource languages. This work explores whether existing techniques for identifying language-specific neurons can be leveraged to enhance cross-lingual task performance in low-resource languages. We conduct detailed experiments covering existing language-specific neuron identification techniques (such as Language Activation Probability Entropy and activation probability-based thresholding) and neuron-specific LoRA fine-tuning with models such as Llama 3.1 and Mistral Nemo. We find that such neuron-specific interventions are insufficient to yield cross-lingual improvements on downstream tasks (XNLI, XQuAD) in low-resource languages. This study highlights the challenges of achieving cross-lingual generalization and provides critical insights for multilingual LLMs.
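The abstract refers to entropy-based identification of language-specific neurons. As a rough illustration of how such a selection can work, the sketch below computes a LAPE-style score from per-language activation probabilities and keeps the lowest-entropy neurons. The corpus sizes, shapes, and the percentile threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of LAPE-style neuron selection (assumed shapes and
# thresholds; not the paper's exact procedure).
# activation_probs[l, n] = estimated fraction of tokens in language l on which
# neuron n fires (activation > 0), measured on a held-out corpus per language.
rng = np.random.default_rng(0)
num_languages, num_neurons = 5, 1000
activation_probs = rng.uniform(0.0, 1.0, size=(num_languages, num_neurons))

# Normalize each neuron's activation probabilities across languages into a
# distribution, then compute its entropy: low entropy means the neuron fires
# predominantly for one language.
eps = 1e-12
p = activation_probs / (activation_probs.sum(axis=0, keepdims=True) + eps)
lape = -(p * np.log(p + eps)).sum(axis=0)

# Keep the lowest-entropy neurons as "language-specific" (the 5th-percentile
# cutoff here is arbitrary, purely for illustration).
threshold = np.percentile(lape, 5)
language_specific = np.where(lape <= threshold)[0]

# Attribute each selected neuron to the language with the highest activation
# probability.
owner_language = activation_probs[:, language_specific].argmax(axis=0)
print(len(language_specific), "language-specific neurons selected")
```

The selected neuron indices could then serve as the target set for neuron-specific fine-tuning, as studied in the paper.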
Soumen Kumar Mondal, Sayambhu Sen, Abhishek Singhania, Preethi Jyothi
Computing technology; computer technology
Soumen Kumar Mondal, Sayambhu Sen, Abhishek Singhania, Preethi Jyothi. Language-specific Neurons Do Not Facilitate Cross-Lingual Transfer [EB/OL]. (2025-03-21) [2025-05-23]. https://arxiv.org/abs/2503.17456.