KO: Kinetics-inspired Neural Optimizer with PDE Simulation Approaches
The design of optimization algorithms for neural networks remains a critical challenge, with most existing methods relying on heuristic adaptations of gradient-based approaches. This paper introduces KO (Kinetics-inspired Optimizer), a novel neural optimizer inspired by kinetic theory and partial differential equation (PDE) simulations. We reimagine the training dynamics of network parameters as the evolution of a particle system governed by kinetic principles, where parameter updates are simulated via a numerical scheme for the Boltzmann transport equation (BTE) that models stochastic particle collisions. This physics-driven approach inherently promotes parameter diversity during optimization, mitigating the phenomenon of parameter condensation, i.e., the collapse of network parameters into low-dimensional subspaces, through mechanisms analogous to thermal diffusion in physical systems. We analyze this property, establishing both a mathematical proof and a physical interpretation. Extensive experiments on image classification (CIFAR-10/100, ImageNet) and text classification (IMDB, Snips) tasks demonstrate that KO consistently outperforms baseline optimizers (e.g., Adam, SGD), achieving accuracy improvements at comparable computational cost.
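The abstract does not specify KO's exact update rule, only that parameter updates are simulated as stochastic particle collisions with a thermal-diffusion-like term. As an illustrative sketch only (not the paper's algorithm), a collision-inspired step might pair parameters at random, exchange a fraction of their difference in a momentum-conserving way, and add a small thermal noise term to discourage condensation; the function name `ko_like_step` and all hyperparameters below are assumptions for illustration:

```python
import numpy as np

def ko_like_step(params, grads, lr=0.01, collision_rate=0.1,
                 temperature=1e-3, rng=None):
    """Illustrative kinetics-inspired update (NOT the paper's exact scheme).

    Combines a standard gradient step with a DSMC-style 'collision':
    randomly paired parameters exchange a fraction of their difference
    (conserving their sum, like momentum in a physical collision), plus
    a thermal-noise term that discourages parameters from condensing
    onto the same values.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = params - lr * grads  # standard gradient-descent step

    # Stochastic pairwise collisions: mix randomly chosen parameter pairs.
    flat = p.ravel().copy()
    n = flat.size
    idx = rng.permutation(n)
    half = n // 2
    a, b = idx[:half], idx[half:2 * half]  # disjoint random pairs
    delta = collision_rate * (flat[b] - flat[a])
    flat[a] += delta
    flat[b] -= delta  # collision conserves the pair's sum

    # Thermal-diffusion term promoting parameter diversity.
    flat += np.sqrt(temperature * lr) * rng.standard_normal(n)
    return flat.reshape(p.shape)
```

With `temperature=0`, the collision step alone preserves the total parameter sum while redistributing values across the population, which is the diversity-promoting mechanism the abstract alludes to.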
Mingquan Feng, Yixin Huang, Yifan Fu, Shaobo Wang, Junchi Yan
Subjects: Computational Physics; Computer Technology
Mingquan Feng, Yixin Huang, Yifan Fu, Shaobo Wang, Junchi Yan. KO: Kinetics-inspired Neural Optimizer with PDE Simulation Approaches [EB/OL]. (2025-05-20) [2025-06-04]. https://arxiv.org/abs/2505.14777.