
Pipelining Split Learning in Multi-hop Edge Networks

Source: arXiv
English Abstract

To support large-scale model training, split learning (SL) enables multiple edge devices/servers to share the intensive training workload. However, most existing works on SL focus solely on two-tier model splitting. Moreover, while some recent works have investigated the model splitting and placement problems for multi-hop SL, these solutions fail to overcome the resource idleness issue, resulting in significant network idle time. In this work, we propose a pipelined SL scheme by addressing the joint optimization problem of model splitting and placement (MSP) in multi-hop edge networks. By applying pipeline parallelism to SL, we show that the MSP problem can be mapped to the problem of minimizing the weighted sum of a bottleneck cost function (min-max) and a linear cost function (min-sum). Based on graph theory, we devise a bottleneck-aware shortest-path algorithm to obtain the optimal solution. In addition, given the MSP outcomes, we derive a closed-form solution for the micro-batch size in the pipeline. Finally, we develop an alternating optimization algorithm over MSP and micro-batch size to solve the joint optimization problem, minimizing the end-to-end training latency. Extensive simulations demonstrate the significant advantages of our algorithm over existing benchmarks without pipeline parallelism.
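The abstract's reduction, minimizing a weighted sum of a bottleneck (min-max) cost and a linear (min-sum) cost over paths in a graph, admits a well-known solution pattern: enumerate candidate bottleneck values, restrict the graph to edges whose bottleneck cost does not exceed the candidate, and run an ordinary shortest-path search on the linear cost. The sketch below illustrates this pattern only; the function names, edge representation, and weights are illustrative assumptions, not the paper's actual formulation or notation.

```python
import heapq

def dijkstra(n, adj, src):
    # Standard Dijkstra over the linear (additive) edge costs.
    dist = [float("inf")] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def bottleneck_aware_shortest_path(n, edges, src, dst, w_max, w_sum):
    """Minimize w_max * max_e b(e) + w_sum * sum_e c(e) over src->dst paths.

    edges: directed edges (u, v, b, c) with bottleneck cost b and
    linear cost c. Names and interface are illustrative assumptions.
    """
    best = float("inf")
    # Enumerate each distinct bottleneck value as a threshold B; the
    # optimal path is found when B equals its true max edge cost.
    for B in sorted({b for _, _, b, _ in edges}):
        adj = [[] for _ in range(n)]
        for u, v, b, c in edges:
            if b <= B:  # keep only edges feasible under this threshold
                adj[u].append((v, c))
        d = dijkstra(n, adj, src)[dst]
        if d < float("inf"):
            best = min(best, w_max * B + w_sum * d)
    return best
```

With at most |E| distinct bottleneck values, this runs |E| shortest-path computations, which is the typical complexity trade-off for such min-max/min-sum hybrids; the paper's own algorithm may exploit additional structure of the multi-hop edge network.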

Xuanheng Li, Xianhao Chen, Wei Wei, Zheng Lin, Tao Li

Subject: Computing Technology, Computer Technology

Xuanheng Li, Xianhao Chen, Wei Wei, Zheng Lin, Tao Li. Pipelining Split Learning in Multi-hop Edge Networks [EB/OL]. (2025-05-07) [2025-06-12]. https://arxiv.org/abs/2505.04368.
