A Learning-Based Inexact ADMM for Solving Quadratic Programs
Convex quadratic programs (QPs) are a fundamental computational primitive across diverse domains, including financial optimization, control systems, and machine learning. The alternating direction method of multipliers (ADMM) has emerged as a preferred first-order approach due to its iteration efficiency, as exemplified by the state-of-the-art OSQP solver. Machine-learning-enhanced optimization algorithms have recently shown significant success in accelerating the solution process. This work introduces a neural-accelerated ADMM variant that replaces exact subproblem solutions with learned approximations produced by a parameter-efficient Long Short-Term Memory (LSTM) network. We derive convergence guarantees within the inexact ADMM framework, establishing that the learning-augmented method retains primal-dual convergence provided the learned approximations satisfy prescribed residual thresholds. Extensive experiments demonstrate that our approach achieves superior solution accuracy compared to existing learning-to-optimize methods while delivering computational speedups of up to $7\times$, $28\times$, and $22\times$ over Gurobi, SCS, and OSQP, respectively. A detailed performance analysis further confirms near-perfect compliance with the theoretical assumptions, thereby ensuring convergence of the algorithm.
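To make the idea concrete, the sketch below illustrates the kind of inexact ADMM loop the abstract describes, using an OSQP-style splitting of the QP $\min_x \tfrac{1}{2}x^\top Px + q^\top x$ s.t. $l \le Ax \le u$. This is not the paper's implementation: the function `approximate_x_step` is a hypothetical stand-in for the paper's LSTM predictor, and the residual budget `eps_k` and exact-solve fallback are illustrative assumptions about how an inexactness condition might be enforced.

```python
# Minimal sketch (assumptions noted below) of an inexact ADMM loop for
#   minimize 0.5 x'Px + q'x   subject to   l <= Ax <= u,
# where the x-subproblem may be solved approximately by a learned predictor.
# `approximate_x_step` is a hypothetical placeholder for an LSTM-based
# predictor; the eps_k schedule and fallback rule are illustrative only.
import numpy as np

def inexact_admm_qp(P, q, A, l, u, approximate_x_step=None,
                    rho=1.0, sigma=1e-6, max_iter=200, tol=1e-6):
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    K = P + sigma * np.eye(n) + rho * A.T @ A      # x-subproblem matrix

    for k in range(max_iter):
        rhs = sigma * x - q + A.T @ (rho * z - y)
        eps_k = 1.0 / (k + 1) ** 2                  # summable inexactness budget

        # Try the learned (inexact) x-update first, if one is provided.
        x_new = approximate_x_step(K, rhs) if approximate_x_step else None
        if x_new is None or np.linalg.norm(K @ x_new - rhs) > eps_k:
            x_new = np.linalg.solve(K, rhs)         # fall back to an exact solve

        Ax = A @ x_new
        z_new = np.clip(Ax + y / rho, l, u)         # projection onto [l, u]
        y = y + rho * (Ax - z_new)                  # dual ascent step

        prim_res = np.linalg.norm(Ax - z_new)
        dual_res = rho * np.linalg.norm(A.T @ (z_new - z))
        x, z = x_new, z_new
        if prim_res < tol and dual_res < tol:
            break
    return x, z, y
```

In this sketch, the learned step is accepted only when its subproblem residual stays within the shrinking budget `eps_k`; otherwise the loop reverts to an exact linear solve, which is one common way to satisfy the residual conditions that inexact-ADMM convergence analyses require.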
Xi Gao, Jinxin Xiong, Linxin Yang, Akang Wang, Weiwei Xu, Jiang Xue
Subject areas: computing and computer technology; fundamental theory of automation
Xi Gao, Jinxin Xiong, Linxin Yang, Akang Wang, Weiwei Xu, Jiang Xue. A Learning-Based Inexact ADMM for Solving Quadratic Programs [EB/OL]. (2025-05-14) [2025-06-09]. https://arxiv.org/abs/2505.09391