
dots.llm1 Technical Report

Source: arXiv

Abstract

Mixture of Experts (MoE) models have emerged as a promising paradigm for scaling language models efficiently by activating only a subset of parameters for each input token. In this report, we present dots.llm1, a large-scale MoE model that activates 14B parameters out of a total of 142B parameters, delivering performance on par with state-of-the-art models while reducing training and inference costs. Leveraging our meticulously crafted and efficient data processing pipeline, dots.llm1 achieves performance comparable to Qwen2.5-72B after pretraining on 11.2T high-quality tokens and post-training to fully unlock its capabilities. Notably, no synthetic data is used during pretraining. To foster further research, we open-source intermediate training checkpoints at every one trillion tokens, providing valuable insights into the learning dynamics of large language models.
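As an illustration of the sparse activation described in the abstract, the sketch below implements a minimal top-k MoE routing layer in PyTorch: a router scores each token against all experts, only the k highest-scoring experts run for that token, and their outputs are combined with normalized gate weights. The hidden sizes, expert count, and k used here are hypothetical placeholders for illustration and do not reflect the actual dots.llm1 architecture.

# Minimal sketch of top-k MoE routing (illustrative only; the sizes, expert
# count, and k below are hypothetical, not the dots.llm1 configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (n_tokens, d_model)
        scores = self.router(x)                  # (n_tokens, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        gates = F.softmax(topk_scores, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the k selected experts run for each token, so the number of
        # active parameters per token is a small fraction of the total.
        for e, expert in enumerate(self.experts):
            token_rows, slots = (topk_idx == e).nonzero(as_tuple=True)
            if token_rows.numel() == 0:
                continue
            out[token_rows] += gates[token_rows, slots].unsqueeze(-1) * expert(x[token_rows])
        return out

# Example: route 4 tokens through 8 experts, activating 2 experts per token.
tokens = torch.randn(4, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 512])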

Bi Huo, Bin Tu, Cheng Qin, Da Zheng, Debing Zhang, Dongjie Zhang, En Li, Fu Guo, Jian Yao, Jie Lou, Junfeng Tian, Li Hu, Ran Zhu, Shengdong Chen, Shuo Liu, Su Guang, Te Wo, Weijun Zhang, Xiaoming Shi, Xinxin Peng, Xing Wu, Yawen Liu, Yuqiu Ji, Ze Wen, Zhenhai Liu, Zichao Li, Zilong Liao

Subject: Computing Technology, Computer Technology

Bi Huo, Bin Tu, Cheng Qin, Da Zheng, Debing Zhang, Dongjie Zhang, En Li, Fu Guo, Jian Yao, Jie Lou, Junfeng Tian, Li Hu, Ran Zhu, Shengdong Chen, Shuo Liu, Su Guang, Te Wo, Weijun Zhang, Xiaoming Shi, Xinxin Peng, Xing Wu, Yawen Liu, Yuqiu Ji, Ze Wen, Zhenhai Liu, Zichao Li, Zilong Liao. dots.llm1 Technical Report[EB/OL]. (2025-06-06)[2025-06-17]. https://arxiv.org/abs/2506.05767.
