
Scaling Fine-Grained MoE Beyond 50B Parameters: Empirical Evaluation and Practical Insights

Source: arXiv
Abstract

Mixture of Experts (MoE) architectures have emerged as pivotal for scaling Large Language Models (LLMs) efficiently. Fine-grained MoE approaches, which use a larger number of smaller experts, have demonstrated potential in improving model convergence and quality. This work proposes a set of training recipes and provides a comprehensive empirical evaluation of fine-grained MoE, directly comparing its scaling properties against standard MoE configurations for models with up to 56B total (17B active) parameters. We investigate convergence speed, model performance on downstream benchmarks, and practical training considerations across various setups. Overall, at the largest scale we show that fine-grained MoE achieves better validation loss and higher accuracy across a set of downstream benchmarks. This study offers empirical grounding and practical insights for leveraging fine-grained MoE in the development of future large-scale models.
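
The core idea behind fine-grained MoE, replacing a few wide experts with many narrower ones while keeping total and active parameter counts roughly fixed, can be illustrated with a minimal sketch. The layer below is not the authors' implementation: all sizes (d_model, d_ff, expert counts, top-k) are illustrative assumptions rather than the paper's 56B-total / 17B-active configuration, and routing details such as capacity limits and load-balancing losses are omitted.

```python
# Minimal sketch of a token-choice top-k MoE layer, used only to contrast a
# "standard" and a "fine-grained" configuration at a matched parameter budget.
# All hyperparameters here are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    """Simplified MoE layer: no capacity limits, no load-balancing loss."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Each token is sent to its top-k experts,
        # and their outputs are combined with the softmaxed router weights.
        probs = F.softmax(self.router(x), dim=-1)
        weights, expert_idx = probs.topk(self.top_k, dim=-1)  # (num_tokens, top_k)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


def param_count(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())


d_model = 512
# "Standard" MoE: few wide experts, 2 active per token (illustrative sizes).
standard = MoELayer(d_model, d_ff=2048, num_experts=8, top_k=2)
# Fine-grained MoE: 4x more, 4x narrower experts, 4x higher top-k, so total
# and active parameters stay roughly unchanged while granularity increases.
fine_grained = MoELayer(d_model, d_ff=512, num_experts=32, top_k=8)

x = torch.randn(16, d_model)  # 16 tokens
print(standard(x).shape, fine_grained(x).shape)          # both: torch.Size([16, 512])
print(param_count(standard), param_count(fine_grained))  # roughly equal totals
```

At the scale studied in the paper the same trade-off is controlled by expert width, expert count, and routing top-k; the paper's contribution is the empirical comparison of these configurations at up to 56B total parameters, not the toy layer above.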

Jakub Krajewski, Marcin Chochowski, Daniel Korzekwa

Subject areas: Computing Technology, Computer Technology

Jakub Krajewski, Marcin Chochowski, Daniel Korzekwa. Scaling Fine-Grained MoE Beyond 50B Parameters: Empirical Evaluation and Practical Insights [EB/OL]. (2025-06-03) [2025-06-21]. https://arxiv.org/abs/2506.02890
