
Mpemba Effect in Large-Language Model Training Dynamics: A Minimal Analysis of the Valley-River model

Source: arXiv
Abstract

Learning rate (LR) schedules in large language model (LLM) training often follow an empirical template: warm-up, a constant plateau/stable phase, and decay (WSD). However, the mechanistic explanation for this strategy remains underexplored, and the choice of plateau height and decay schedule is largely heuristic. In this paper, we connect training dynamics to a thermodynamic analogy via the Mpemba effect, a phenomenon in which a hotter system cools faster than a colder one when quenched into the same bath. We analyze a class of "valley-river" loss landscapes, in which sharp (valley) directions equilibrate quickly while flatter (river) directions govern global descent. The Mpemba effect explains the necessity of the warm-up phase and motivates a high plateau, rather than a low one, to accelerate the loss decrease during decay. We show that for certain loss landscapes there exists an optimal plateau learning rate, the "strong Mpemba point," at which the coefficient of the slowest mode vanishes, yielding faster convergence during the decay phase. We derive analytical conditions for its existence and estimate the decay dynamics required to preserve the Mpemba advantage. Our minimal model and analysis offer a principled justification for plateau-based schedulers and provide guidance for tuning the LR in LLMs with minimal hyperparameter sweeps.
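To make the WSD template and the valley-river picture concrete, the sketch below shows a minimal illustration in Python. It is not the authors' implementation: the function name wsd_lr, the linear warm-up/decay shapes, and the fraction parameters (warmup_frac, decay_frac) are illustrative assumptions, and the two-dimensional quadratic with curvatures a >> b is only a stand-in for the sharp (valley) and flat (river) directions the abstract describes.

```python
import numpy as np

def wsd_lr(step: int, total_steps: int, peak_lr: float,
           warmup_frac: float = 0.05, decay_frac: float = 0.2,
           final_lr: float = 0.0) -> float:
    """Warm-up / Stable / Decay (WSD) schedule:
    linear warm-up -> constant plateau at peak_lr -> linear decay."""
    warmup_steps = max(int(warmup_frac * total_steps), 1)
    decay_start = int((1.0 - decay_frac) * total_steps)
    if step < warmup_steps:                        # warm-up phase
        return peak_lr * (step + 1) / warmup_steps
    if step < decay_start:                         # stable plateau phase
        return peak_lr
    t = (step - decay_start) / max(total_steps - decay_start, 1)
    return peak_lr + (final_lr - peak_lr) * t      # decay phase

# Toy "valley-river" landscape: a sharp direction (curvature a) that
# equilibrates quickly and a flat direction (curvature b << a) that
# governs the slow global descent. Noisy gradient descent under WSD.
rng = np.random.default_rng(0)
a, b = 50.0, 0.1                  # illustrative curvatures, a >> b
theta = np.array([1.0, 1.0])      # (valley coordinate, river coordinate)
total = 2000
for step in range(total):
    lr = wsd_lr(step, total, peak_lr=0.02)   # stable: lr <= 2 / a
    grad = np.array([a, b]) * theta          # gradient of the quadratic
    noise = 0.01 * rng.standard_normal(2)    # crude SGD-noise stand-in
    theta = theta - lr * (grad + noise)
loss = 0.5 * (a * theta[0] ** 2 + b * theta[1] ** 2)
print(f"final loss: {loss:.4f}")
```

In this toy setting, scanning peak_lr and comparing the loss reached after decay is the analogue of the plateau-height sweep the abstract discusses: the plateau determines how the fast valley modes are prepared before the quench, which is where the strong Mpemba point would be sought.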

Sibei Liu, Zhijian Hu

Subjects: Computing Technology, Computer Technology

Sibei Liu, Zhijian Hu. Mpemba Effect in Large-Language Model Training Dynamics: A Minimal Analysis of the Valley-River model [EB/OL]. (2025-07-06) [2025-07-16]. https://arxiv.org/abs/2507.04206.
