
Taming LLMs by Scaling Learning Rates with Gradient Grouping

Source: arXiv
Abstract

Training large language models (LLMs) poses challenges due to their massive scale and heterogeneous architectures. While adaptive optimizers like AdamW help address gradient variations, they still struggle with efficient and effective parameter-wise learning rate estimation, resulting in training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) techniques. This work introduces Scaling with Gradient Grouping (SGG), an optimizer wrapper that improves adaptive learning rate estimation through dynamic grouping and group-specific scaling. SGG first groups the gradient statistics in each layer into clusters, then applies cluster-specific scaling to calibrate the learning rate of each parameter, thus imposing collective group-wise constraints while maintaining precise per-parameter adaptation. Experiments on diverse (M)LLM benchmarks show that SGG integrates seamlessly with existing optimizers and offers consistent gains and faster convergence over baselines across various model sizes. Its stability under varying batch sizes and learning rates establishes SGG as a robust choice for LLM optimization.
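The abstract gives only this high-level description of the grouping-then-scaling step. As a rough, hypothetical sketch of that idea, the PyTorch snippet below clusters a layer's gradient magnitudes and rescales per-parameter learning rates within each cluster; the function name, the k-means clustering, the quantile initialization, and the clamp range are all illustrative assumptions, not the paper's actual implementation.

```python
import torch

def sgg_scale_factors(grad: torch.Tensor, num_clusters: int = 3,
                      eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical per-layer SGG-style step: cluster gradient magnitudes
    (a stand-in for the adaptive-moment statistics the paper groups) and
    return a learning-rate scale factor shared within each cluster."""
    stats = grad.abs().flatten().float()
    # Initialize cluster centers from quantiles of the magnitude distribution.
    centers = torch.quantile(stats, torch.linspace(0.1, 0.9, num_clusters))
    assign = torch.zeros(stats.numel(), dtype=torch.long)
    # A few iterations of 1-D k-means (an illustrative grouping choice).
    for _ in range(10):
        assign = (stats.unsqueeze(1) - centers.unsqueeze(0)).abs().argmin(dim=1)
        for k in range(num_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = stats[mask].mean()
    # Pull each cluster's effective learning rate toward the layer-wide mean:
    # a collective group-wise constraint on top of per-parameter adaptation.
    layer_mean = stats.mean()
    scale = (layer_mean / (centers[assign] + eps)).clamp(0.5, 2.0)
    return scale.view_as(grad)
```

An optimizer wrapper in this spirit would multiply each parameter's update by these factors before applying it, e.g. param.add_(update * sgg_scale_factors(param.grad), alpha=-lr), leaving the base optimizer (AdamW, etc.) unchanged.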

Siyuan Li, Juanxi Tian, Zedong Wang, Xin Jin, Zicheng Liu, Wentao Zhang, Dan Xu

Computing technology, computer technology

Siyuan Li, Juanxi Tian, Zedong Wang, Xin Jin, Zicheng Liu, Wentao Zhang, Dan Xu. Taming LLMs by Scaling Learning Rates with Gradient Grouping [EB/OL]. (2025-06-01) [2025-07-16]. https://arxiv.org/abs/2506.01049.
