
PyLO: Towards Accessible Learned Optimizers in PyTorch

Source: arXiv
Abstract

Learned optimizers have been an active research topic over the past decade, with increasing progress toward practical, general-purpose optimizers that can serve as drop-in replacements for widely used methods like Adam. However, recent advances -- such as VeLO, which was meta-trained for 4000 TPU-months -- remain largely inaccessible to the broader community, in part due to their reliance on JAX and the absence of user-friendly packages for applying the optimizers after meta-training. To address this gap, we introduce PyLO, a PyTorch-based library that brings learned optimizers to the broader machine learning community through familiar, widely adopted workflows. Unlike prior work focused on synthetic or convex tasks, our emphasis is on applying learned optimization to real-world large-scale pre-training tasks. Our release includes a CUDA-accelerated version of the small_fc_lopt learned optimizer architecture of Metz et al. (2022a), delivering substantial speedups -- raising throughput from 39.36 to 205.59 samples/sec when training ViT B/16 with batch size 32. PyLO also makes it easy to combine learned optimizers with existing optimization tools such as learning rate schedules and weight decay, and we find that learned optimizers benefit substantially from doing so. Our code is available at https://github.com/Belilovsky-Lab/pylo
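To make the drop-in workflow described above concrete, below is a minimal PyTorch sketch of using a learned optimizer together with a standard learning rate schedule and weight decay. The pylo.optim import path, the VeLO class name, and its constructor arguments are assumptions for illustration only and may not match the released PyLO API; the model, scheduler, and training loop use standard torch and torch.optim components.

    import torch
    import torch.nn as nn

    # Hypothetical import: the actual module and class names in PyLO may differ.
    from pylo.optim import VeLO

    model = nn.Linear(784, 10)

    # Used like any torch.optim optimizer; the lr and weight_decay arguments are
    # assumptions shown to illustrate combining a learned optimizer with weight decay.
    optimizer = VeLO(model.parameters(), lr=1.0, weight_decay=0.01)

    # A standard PyTorch learning rate schedule composed with the optimizer.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

    criterion = nn.CrossEntropyLoss()
    for step in range(1000):
        inputs = torch.randn(32, 784)          # dummy batch
        targets = torch.randint(0, 10, (32,))  # dummy labels
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        scheduler.step()

If the released optimizer follows the torch.optim interface as the abstract's drop-in framing suggests, an existing training script would only need to swap the optimizer constructor to adopt it.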

Paul Janson, Benjamin Therien, Quentin Anthony, Xiaolong Huang, Abhinav Moudgil, Eugene Belilovsky

Subject: Computing Technology; Computer Science

Paul Janson, Benjamin Therien, Quentin Anthony, Xiaolong Huang, Abhinav Moudgil, Eugene Belilovsky. PyLO: Towards Accessible Learned Optimizers in PyTorch [EB/OL]. (2025-06-11) [2025-07-18]. https://arxiv.org/abs/2506.10315.
