
Inertial Quadratic Majorization Minimization with Application to Kernel Regularized Learning

Source: arXiv
Abstract

First-order methods in convex optimization offer low per-iteration cost but often suffer from slow convergence, while second-order methods achieve fast local convergence at the expense of costly Hessian inversions. In this paper, we highlight a middle ground: minimizing a quadratic majorant with fixed curvature at each iteration. This strategy strikes a balance between per-iteration cost and convergence speed, and crucially allows the reuse of matrix decompositions, such as Cholesky or spectral decompositions, across iterations and varying regularization parameters. We introduce the Quadratic Majorization Minimization with Extrapolation (QMME) framework and establish its sequential convergence properties under standard assumptions. The new perspective of our analysis is to center the arguments around the induced norm of the curvature matrix $H$. To demonstrate practical advantages, we apply QMME to large-scale kernel regularized learning problems. In particular, we propose a novel Sylvester equation modelling technique for kernel multinomial regression. In Julia-based experiments, QMME compares favorably against various established first- and second-order methods. Furthermore, we demonstrate that our algorithms complement existing kernel approximation techniques by handling sketching matrices with large projection dimensions more efficiently. Our numerical experiments and real data analysis are available and fully reproducible at https://github.com/qhengncsu/QMME.jl.
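The basic iteration the abstract describes, minimizing a quadratic majorant with a fixed curvature matrix while reusing its Cholesky factor and adding an inertial (extrapolation) step, can be sketched in a few lines of Julia, the language of the paper's experiments. The following is a minimal illustration under stated assumptions, not the QMME.jl implementation: the function name `qmme_sketch`, the gradient callback `grad`, and the extrapolation schedule are illustrative choices, and the example assumes a smooth objective whose curvature is bounded above by `H`.

```julia
using LinearAlgebra

# Minimal sketch of a quadratic-majorization iteration with extrapolation.
# Assumes a smooth objective f with gradient `grad` and a fixed curvature
# matrix H such that f(x) ≤ f(y) + ∇f(y)'(x - y) + ½(x - y)'H(x - y) for all x, y.
# The extrapolation weight below is a common inertial choice, not necessarily
# the schedule used in the paper.
function qmme_sketch(grad, H, x0; iters=100)
    F = cholesky(Symmetric(H))      # factor once, reuse at every iteration
    x_prev, x = copy(x0), copy(x0)
    for k in 1:iters
        β = (k - 1) / (k + 2)       # illustrative inertial/extrapolation weight
        y = x + β * (x - x_prev)    # extrapolated point
        x_prev = x
        x = y - F \ grad(y)         # minimizer of the quadratic majorant at y
    end
    return x
end

# Example: ridge-type objective ½‖Ax - b‖² + (λ/2)‖x‖²
A = randn(50, 10); b = randn(50); λ = 0.1
grad(x) = A' * (A * x - b) + λ * x
H = A' * A + λ * I                  # exact curvature, hence a valid majorant here
x̂ = qmme_sketch(grad, H, zeros(10))
```

Because `H` is factored only once, changing the regularization parameter or restarting from a new initial point reuses the same decomposition, which is the cost advantage the framework emphasizes.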

Qiang Heng, Caixing Wang

Computing Technology, Computer Technology

Qiang Heng, Caixing Wang. Inertial Quadratic Majorization Minimization with Application to Kernel Regularized Learning [EB/OL]. (2025-07-06) [2025-07-16]. https://arxiv.org/abs/2507.04247.