
Model Parallelism With Subnetwork Data Parallelism

Source: arXiv
Abstract

Distributed pre-training of large models at scale often imposes heavy memory demands on individual nodes and incurs significant intra-node communication costs. We propose a novel alternative approach that reduces the memory requirements by training small, structured subnetworks of the model on separate workers. Unlike pipelining, our method avoids inter-node activation communication and maintains bandwidth requirements that are comparable to or lower than standard data parallel communication schemes based on all-reduce. We evaluate two subnetwork construction strategies guided by the principle of ensuring uniform representation of each parameter across the distributed training setup. Our results show that the stochastic block dropping technique consistently outperforms the width-wise subnetwork construction previously explored in federated learning. We empirically attribute this superior performance to stronger gradient alignment in subnetworks that retain blocks having skip connections. Preliminary experiments highlight the promise of our approach, achieving a 20-40% reduction in memory usage without any loss in performance.
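The abstract describes each worker training a small structured subnetwork, with stochastic block dropping (keeping blocks' skip connections) reported as the stronger construction. Below is a minimal, single-process PyTorch sketch of that idea, simulating workers in a loop rather than using a real all-reduce; names such as ResidualBlock, keep_prob, and the 1/keep_prob gradient rescaling are illustrative assumptions, not the authors' implementation.

# Minimal sketch of stochastic block dropping for subnetwork data parallelism.
# Illustrative assumption of the idea only; `keep_prob`, `ResidualBlock`, and
# `num_workers` are hypothetical names, not the paper's code.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, active: bool):
        # A dropped block reduces to its skip connection (identity), so the
        # worker needs neither its activations nor its weights in memory.
        return (x + self.ff(x)) if active else x

class ResidualNet(nn.Module):
    def __init__(self, dim=64, depth=8, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, mask):
        for block, active in zip(self.blocks, mask):
            x = block(x, active)
        return self.head(x)

def train_step(model, shards, num_workers=4, keep_prob=0.5, lr=1e-3):
    """One simulated step: each 'worker' trains a random block-subnetwork on
    its own data shard; gradients are averaged (as an all-reduce would do) and
    block gradients are rescaled by 1/keep_prob so every parameter is
    represented uniformly in expectation."""
    depth = len(model.blocks)
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for w in range(num_workers):
        x, y = shards[w]
        # Each worker independently samples which residual blocks to keep.
        mask = (torch.rand(depth) < keep_prob).tolist()
        model.zero_grad()
        loss = nn.functional.cross_entropy(model(x, mask), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                grads[n] += p.grad / num_workers
    with torch.no_grad():
        for n, p in model.named_parameters():
            # Dropped-block parameters receive gradients from fewer workers,
            # so rescale them to keep the update unbiased in expectation.
            scale = 1.0 / keep_prob if n.startswith("blocks.") else 1.0
            p -= lr * scale * grads[n]

if __name__ == "__main__":
    model = ResidualNet()
    shards = [(torch.randn(32, 64), torch.randint(0, 10, (32,))) for _ in range(4)]
    train_step(model, shards)

In a real multi-node setup, each worker would hold only its sampled blocks and the averaging would be a standard all-reduce over the shared parameters, which is what keeps bandwidth comparable to ordinary data parallelism.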

Vaibhav Singh, Zafir Khalid, Edouard Oyallon, Eugene Belilovsky

Subjects: Computing technology, computer technology

Vaibhav Singh, Zafir Khalid, Edouard Oyallon, Eugene Belilovsky. Model Parallelism With Subnetwork Data Parallelism [EB/OL]. (2025-07-11) [2025-07-22]. https://arxiv.org/abs/2507.09029.
