
Accelerating Deep Neural Network Training via Distributed Hybrid Order Optimization


Source: arXiv
Abstract

Scaling deep neural network (DNN) training to more devices can reduce time-to-solution. However, it is impractical for users with limited computing resources. FOSI, a hybrid order optimizer, converges faster than conventional optimizers by exploiting both gradient and curvature information when updating the DNN model. It therefore offers a new opportunity for accelerating DNN training in the resource-constrained setting. In this paper, we explore its distributed design, namely DHO$_2$, including distributed calculation of curvature information and model updates with partial curvature information, to accelerate DNN training with a low memory burden. To further reduce the training time, we design a novel strategy that parallelizes the calculation of curvature information and the model update on different devices. Experimentally, our distributed design achieves an approximately linear reduction of the memory burden on each device as the number of devices increases. Meanwhile, it achieves a $1.4\times\sim2.1\times$ speedup in total training time compared with other distributed designs based on conventional first- and second-order optimizers.
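The core hybrid-order idea, a curvature-scaled step in a low-dimensional subspace combined with a plain first-order step in its complement, can be illustrated on a toy quadratic problem. The NumPy sketch below illustrates that idea only; it is not the paper's DHO$_2$ algorithm, and the function name, step scaling, subspace size, and use of an explicit Hessian are assumptions made for demonstration.

```python
import numpy as np

def hybrid_order_step(params, grad, hessian, k=2, lr_first=0.05):
    """One hybrid-order update on a toy quadratic objective.

    The top-k Hessian eigendirections receive a Newton-like
    (curvature-scaled) step; the remaining directions receive a plain
    gradient step. This mirrors the gradient-plus-curvature idea in the
    abstract, not the exact DHO_2 procedure; a real optimizer would
    estimate the leading eigenpairs matrix-free (e.g., via Lanczos).
    """
    eigvals, eigvecs = np.linalg.eigh(hessian)
    top_vals, top_vecs = eigvals[-k:], eigvecs[:, -k:]

    # Split the gradient into the curvature subspace and its complement.
    coeffs = top_vecs.T @ grad
    g_rest = grad - top_vecs @ coeffs

    # Curvature-scaled step in the subspace, first-order step elsewhere.
    newton_step = top_vecs @ (coeffs / top_vals)
    return params - newton_step - lr_first * g_rest

# Toy usage: minimize 0.5 * x^T A x - b^T x with an ill-conditioned A.
rng = np.random.default_rng(0)
A = np.diag([100.0, 10.0, 1.0, 0.1])
b = rng.normal(size=4)
x = np.zeros(4)
for _ in range(20):
    x = hybrid_order_step(x, A @ x - b, A, k=2, lr_first=0.05)
print("residual norm:", np.linalg.norm(A @ x - b))
```

In the distributed setting described by the abstract, the curvature estimation and the model update would additionally be partitioned across devices; this sketch keeps everything on a single process for clarity.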

Lailong Luo, Shunxian Gu, Chaoqun You, Junxu Xia, Deke Guo, Bangbang Ren

Computing Technology; Computer Technology

Lailong Luo, Shunxian Gu, Chaoqun You, Junxu Xia, Deke Guo, Bangbang Ren. Accelerating Deep Neural Network Training via Distributed Hybrid Order Optimization [EB/OL]. (2025-05-02) [2025-06-23]. https://arxiv.org/abs/2505.00982.
