
Optimizing Allreduce Operations for Heterogeneous Architectures with Multiple Processes per GPU

Source: arXiv

Abstract

Large inter-GPU all-reduce operations, prevalent throughout deep learning, are bottlenecked by communication costs. Emerging heterogeneous architectures comprise complex nodes, often containing $4$ GPUs and dozens to hundreds of CPU cores. Parallel applications are typically accelerated on the available GPUs, using only a single CPU core per GPU while the remaining cores sit idle. This paper presents novel optimizations to large GPU-aware all-reduce operations, extending lane-aware reductions to the GPUs and, notably, using multiple CPU cores per GPU to accelerate these operations. These multi-CPU-accelerated, GPU-aware lane all-reduces yield speedups of up to $2.45$x for large MPI all-reduces across the NVIDIA A100 GPUs of NCSA's Delta supercomputer. Finally, the approach is extended to NVIDIA's and AMD's collective communication libraries, achieving speedups of up to $1.77$x and $1.71$x, respectively, across $2$ state-of-the-art supercomputers.
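
The general pattern can be illustrated with a short GPU-aware MPI sketch. This is a hypothetical illustration, not the paper's implementation; it assumes a CUDA-aware MPI build, a fixed number of ranks bound to each GPU, and consecutive rank-to-GPU placement. Each of the several CPU ranks sharing a GPU reduces only its own slice of the device buffer, so the inter-GPU reduction work is spread across otherwise-idle cores.

```c
/*
 * Minimal sketch (not the paper's code): RANKS_PER_GPU MPI ranks share one GPU,
 * and each rank performs a GPU-aware MPI_Allreduce on its own slice of the
 * device buffer, so slices are reduced concurrently across CPU cores.
 * The final intra-GPU exchange of reduced slices is omitted for brevity.
 */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Assumption: RANKS_PER_GPU consecutive ranks are bound to the same GPU. */
    const int RANKS_PER_GPU = 4;
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);
    if (n_gpus > 0)
        cudaSetDevice((rank / RANKS_PER_GPU) % n_gpus);

    const int n_total = 1 << 24;                 /* doubles in the full message */
    const int n_slice = n_total / RANKS_PER_GPU; /* slice owned by this rank    */

    double *d_slice = NULL;
    cudaMalloc((void **)&d_slice, n_slice * sizeof(double));
    cudaMemset(d_slice, 0, n_slice * sizeof(double));

    /* Group the ranks that own the same slice index on every GPU; with a
       GPU-aware MPI, the device pointer is passed directly to MPI_Allreduce. */
    MPI_Comm slice_comm;
    MPI_Comm_split(MPI_COMM_WORLD, rank % RANKS_PER_GPU, rank, &slice_comm);
    MPI_Allreduce(MPI_IN_PLACE, d_slice, n_slice, MPI_DOUBLE, MPI_SUM, slice_comm);

    MPI_Comm_free(&slice_comm);
    cudaFree(d_slice);
    MPI_Finalize();
    return 0;
}
```

Building such a sketch requires an MPI compiler wrapper with CUDA support (e.g. mpicc linked against the CUDA runtime); a complete all-reduce would additionally combine the per-rank reduced slices within each GPU, a step left out here.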

Michael Adams, Amanda Bienz

Subjects: Computing Technology, Computer Technology

Michael Adams, Amanda Bienz. Optimizing Allreduce Operations for Heterogeneous Architectures with Multiple Processes per GPU [EB/OL]. (2025-08-18) [2025-09-04]. https://arxiv.org/abs/2508.13397.
