Heterogeneous Low-Bandwidth Pre-Training of LLMs
Yazan Obeidi Amir Sarfi Joel Lidin Paul Janson Eugene Belilovsky
Abstract
Pre-training large language models (LLMs) increasingly requires distributed compute, yet bandwidth constraints make it difficult to scale beyond well-provisioned datacenters, especially when model parallelism forces frequent, large inter-device communications. We study whether SparseLoCo, a low-communication data parallel method based on infrequent synchronization and sparse pseudo-gradient exchange, can be combined with low-bandwidth pipeline model parallelism via activation and activation-gradient compression. We introduce a heterogeneous distributed training framework where some participants host full replicas on high-bandwidth interconnects, while resource-limited participants are grouped to jointly instantiate a replica using pipeline parallelism with subspace-projected inter-stage communication. To make the recently introduced subspace pipeline compression compatible with SparseLoCo, we study a number of adaptations. Across large-scale language modeling experiments (178M-1B parameters) on standard pretraining corpora, we find that activation compression composes with SparseLoCo at modest cost, while selective (heterogeneous) compression consistently improves the loss-communication tradeoff relative to compressing all replicas, especially at aggressive compression ratios. These results suggest a practical path to incorporating low-bandwidth model parallelism and heterogeneous participants into LLM pre-training.
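The abstract references two communication-reduction mechanisms: sparse pseudo-gradient exchange across data-parallel replicas (as in SparseLoCo) and subspace-projected activation exchange between pipeline stages. The sketch below is a minimal, generic illustration of both ideas in PyTorch; the names (topk_sparsify, SubspaceCodec), the top-k ratio, and the fixed random orthonormal projection basis are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def topk_sparsify(tensor: torch.Tensor, k_ratio: float = 0.01):
    """Keep only the largest-magnitude entries; return (values, flat indices, shape)."""
    flat = tensor.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, tensor.shape

def desparsify(values: torch.Tensor, indices: torch.Tensor, shape) -> torch.Tensor:
    """Reconstruct a dense tensor from the sparse (values, indices) payload."""
    dense = torch.zeros(shape, dtype=values.dtype)
    dense.view(-1)[indices] = values
    return dense

def pseudo_gradient(global_params: dict, local_params: dict) -> dict:
    """Pseudo-gradient = drift of local weights from the last synchronized global
    weights; only its sparsified form would cross the network between outer steps."""
    return {name: global_params[name] - p for name, p in local_params.items()}

class SubspaceCodec:
    """Project inter-stage activations onto a shared low-rank basis before sending,
    and lift them back after receiving (hypothetical stand-in for subspace
    compression; the shared seed lets both stages build the same basis)."""
    def __init__(self, hidden_dim: int, rank: int, seed: int = 0):
        g = torch.Generator().manual_seed(seed)
        basis = torch.randn(hidden_dim, rank, generator=g)
        self.basis, _ = torch.linalg.qr(basis)  # orthonormal columns

    def encode(self, activations: torch.Tensor) -> torch.Tensor:
        return activations @ self.basis          # (batch, seq, rank)

    def decode(self, compressed: torch.Tensor) -> torch.Tensor:
        return compressed @ self.basis.T         # back to (batch, seq, hidden)

if __name__ == "__main__":
    codec = SubspaceCodec(hidden_dim=512, rank=64)
    acts = torch.randn(2, 16, 512)
    recovered = codec.decode(codec.encode(acts))  # lossy round trip, ~8x less traffic
    print(recovered.shape)
```

The two codecs are independent: top-k sparsification reduces the infrequent data-parallel synchronization traffic, while the subspace projection reduces the per-step activation (and activation-gradient) traffic between pipeline stages hosted by resource-limited participants.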
Citation: Yazan Obeidi, Amir Sarfi, Joel Lidin, Paul Janson, Eugene Belilovsky. Heterogeneous Low-Bandwidth Pre-Training of LLMs [EB/OL]. (2026-01-05) [2026-01-09]. https://arxiv.org/abs/2601.02360.
Subject classification: Computing and computer technology