Infinity Instruct: Scaling Instruction Selection and Synthesis to Enhance Language Models
Large Language Models (LLMs) demonstrate strong performance in real-world applications, yet existing open-source instruction datasets often concentrate on narrow domains, such as mathematics or coding, limiting generalization and widening the gap with proprietary models. To bridge this gap, we introduce Infinity-Instruct, a high-quality instruction dataset designed to enhance both the foundational and chat capabilities of LLMs through a two-phase pipeline. In Phase 1, we curate 7.4M high-quality foundational instructions (InfInstruct-F-7.4M) from over 100M samples using hybrid data selection techniques. In Phase 2, we synthesize 1.5M high-quality chat instructions (InfInstruct-G-1.5M) through a two-stage process involving instruction selection, evolution, and diagnostic filtering. We empirically evaluate Infinity-Instruct by fine-tuning several open-source models, including Mistral, LLaMA, Qwen, and Yi, and observe substantial performance gains on both foundational and instruction-following benchmarks, consistently surpassing the official instruction-tuned counterparts. Notably, InfInstruct-LLaMA3.1-70B outperforms GPT-4-0314 by 8.6% on instruction-following tasks while achieving comparable foundational performance. These results underscore the synergy between foundational and chat training and offer new insights into holistic LLM development. Our dataset (https://huggingface.co/datasets/BAAI/Infinity-Instruct) and code (https://gitee.com/li-touch/infinity-instruct) have been publicly released.
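The abstract describes the pipeline only at a high level: hybrid selection in Phase 1, then selection, evolution, and diagnostic filtering in Phase 2. The sketch below is a minimal, hypothetical illustration of how such a pipeline could be wired together; it is not the authors' released code, and every function name, heuristic, and threshold (quality_score, evolve_instruction, passes_diagnosis) is an assumption made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    instruction: str
    response: str

# --- Phase 1: hybrid data selection (toy stand-in, NOT the paper's method) ---
def quality_score(sample: Sample) -> float:
    """Hypothetical stand-in for hybrid selection signals,
    e.g., simple heuristics combined with model-based scoring."""
    length_ok = 10 <= len(sample.instruction) <= 2000
    has_response = len(sample.response) > 0
    return float(length_ok and has_response)

def select_foundational(pool: list[Sample], threshold: float = 0.5) -> list[Sample]:
    """Keep only samples whose score clears an assumed threshold."""
    return [s for s in pool if quality_score(s) >= threshold]

# --- Phase 2: selection -> evolution -> diagnostic filtering ---
def evolve_instruction(sample: Sample) -> Sample:
    """Placeholder for an LLM-driven rewrite that adds constraints or depth."""
    return Sample(sample.instruction + " Explain your reasoning step by step.",
                  sample.response)

def passes_diagnosis(sample: Sample) -> bool:
    """Placeholder diagnostic filter, e.g., reject degenerate instructions."""
    return len(sample.instruction.split()) >= 5

def build_chat_set(pool: list[Sample]) -> list[Sample]:
    """Chain the two phases: select, evolve, then diagnostically filter."""
    evolved = (evolve_instruction(s) for s in select_foundational(pool))
    return [s for s in evolved if passes_diagnosis(s)]

if __name__ == "__main__":
    pool = [Sample("Summarize the benefits of instruction tuning for LLMs.", "...")]
    print(len(build_chat_set(pool)))  # -> 1
```

In practice, the scoring and evolution steps would be backed by trained scorers and LLM calls rather than the string heuristics used here; the sketch only shows the shape of the select-evolve-filter flow the abstract names.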
Jijie Li, Li Du, Hanyu Zhao, Bo-wen Zhang, Liangdong Wang, Boyan Gao, Guang Liu, Yonghua Lin
Computing Technology, Computer Technology
Jijie Li, Li Du, Hanyu Zhao, Bo-wen Zhang, Liangdong Wang, Boyan Gao, Guang Liu, Yonghua Lin. Infinity Instruct: Scaling Instruction Selection and Synthesis to Enhance Language Models [EB/OL]. (2025-06-09) [2025-07-03]. https://arxiv.org/abs/2506.11116.