
The Cost of Avoiding Backpropagation

Source: arXiv

Abstract

Forward-mode automatic differentiation (FmAD) and zero-order (ZO) optimization have been proposed as memory-efficient alternatives to backpropagation (BP) for gradient computation, especially in low-resource settings. However, their practical benefits remain unclear due to two key gaps: a lack of comparison against memory-efficient BP variants, such as activation checkpointing, and a lack of a unified theoretical analysis. This work presents a comprehensive theoretical and empirical comparison of BP, FmAD, and ZO methods. Our theoretical analysis shows that while FmAD and ZO can reduce memory usage, they incur significant costs in accuracy, convergence speed, and computation compared to BP with checkpointing. These drawbacks worsen with larger models or constrained perturbation budgets. Empirical experiments on large language and vision-language models show that BP with checkpointing outperforms FmAD and ZO variants, including those enhanced with variance reduction, achieving up to 31.1% higher accuracy, 34.8% faster convergence, and 3.8x fewer computations at comparable memory usage. Our results highlight fundamental limitations of FmAD and ZO, and reaffirm BP with checkpointing as the most effective strategy for model training under memory-constrained settings. Our code is available at https://github.com/Astuary/The_Cost_of_Avoiding_Backpropagation.
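To make the contrast concrete, the following is a minimal illustrative sketch (not taken from the paper's code) of the three gradient-computation strategies the abstract compares, written with JAX on a toy loss: reverse-mode BP returns the exact gradient in one backward pass, a single FmAD pass (via a Jacobian-vector product) yields only a directional derivative along a random tangent, and a ZO estimator uses two loss evaluations along a random perturbation. The loss function and parameter vector here are arbitrary stand-ins.

```python
import jax
import jax.numpy as jnp

# Toy loss over a parameter vector w (stand-in for a model's trainable weights).
def loss(w):
    return jnp.sum(jnp.tanh(w) ** 2)

w = jnp.arange(1.0, 5.0)        # example parameters
key = jax.random.PRNGKey(0)

# Backpropagation (reverse-mode AD): one backward pass gives the exact gradient.
grad_bp = jax.grad(loss)(w)

# Forward-mode AD (FmAD): one JVP gives the directional derivative along a
# random tangent v; the gradient estimate (∇L·v) v covers only one direction,
# so many perturbations are needed to approximate the full gradient.
v = jax.random.normal(key, w.shape)
_, dir_deriv = jax.jvp(loss, (w,), (v,))
grad_fmad_est = dir_deriv * v

# Zero-order (ZO) estimate: two loss evaluations along a random perturbation,
# using a central finite difference instead of any derivative computation.
eps = 1e-3
u = jax.random.normal(key, w.shape)
zo_scalar = (loss(w + eps * u) - loss(w - eps * u)) / (2 * eps)
grad_zo_est = zo_scalar * u

print("BP:  ", grad_bp)
print("FmAD:", grad_fmad_est)
print("ZO:  ", grad_zo_est)
```

Running this shows the single-direction FmAD and ZO estimates deviating from the exact BP gradient, which is the per-step noise that the abstract's accuracy and convergence-speed comparisons attribute to these methods.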

Kunjal Panchal, Sunav Choudhary, Yuriy Brun, Hui Guan

Subjects: Computing Technology; Computer Technology

Kunjal Panchal, Sunav Choudhary, Yuriy Brun, Hui Guan. The Cost of Avoiding Backpropagation [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2506.21833.
