Sharper Convergence Rates for Nonconvex Optimisation via Reduction Mappings
Many high-dimensional optimisation problems exhibit rich geometric structures in their set of minimisers, often forming smooth manifolds due to over-parametrisation or symmetries. When this structure is known, at least locally, it can be exploited through reduction mappings that reparametrise part of the parameter space to lie on the solution manifold. These reductions naturally arise from inner optimisation problems and effectively remove redundant directions, yielding a lower-dimensional objective. In this work, we introduce a general framework to understand how such reductions influence the optimisation landscape. We show that well-designed reduction mappings improve curvature properties of the objective, leading to better-conditioned problems and theoretically faster convergence for gradient-based methods. Our analysis unifies a range of scenarios where structural information at optimality is leveraged to accelerate convergence, offering a principled explanation for the empirical gains observed in such optimisation algorithms.
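For intuition, here is a minimal sketch (not from the paper) of the mechanism the abstract describes: an inner optimisation solved exactly yields a reduced objective F(x) = min_y f(x, y) whose curvature is better conditioned than that of the full objective. The toy quadratic f(x, y) = 0.5*x^2 + 0.5*kappa*(y - x)^2, the parameter kappa, and the step-size bound below are all illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Toy illustration (not from the paper): an ill-conditioned objective
#   f(x, y) = 0.5*x^2 + 0.5*kappa*(y - x)^2
# whose inner problem min_y f(x, y) is solved exactly by y*(x) = x,
# giving the reduced objective F(x) = min_y f(x, y) = 0.5*x^2.
# The reduced problem's curvature no longer depends on kappa.

kappa = 100.0  # conditioning parameter (assumed for illustration)

def f(x, y):
    return 0.5 * x**2 + 0.5 * kappa * (y - x)**2

def grad_f(z):
    x, y = z
    return np.array([x - kappa * (y - x),  # df/dx
                     kappa * (y - x)])     # df/dy

def grad_F(x):
    return x  # gradient of the reduced objective F(x) = 0.5*x^2

# Gradient descent on the full objective: the step size must shrink
# with kappa (the Hessian's largest eigenvalue is below 1 + 2*kappa).
z = np.array([1.0, -1.0])
eta = 1.0 / (1.0 + 2.0 * kappa)
for _ in range(200):
    z = z - eta * grad_f(z)

# Gradient descent on the reduced objective: a unit step suffices.
x = 1.0
for _ in range(200):
    x = x - 1.0 * grad_F(x)

print(f"full objective after 200 steps:    {f(z[0], z[1]):.6f}")
print(f"reduced objective after 200 steps: {f(x, x):.6f}")  # y recovered via y*(x) = x
```

On this toy problem the full-space iterates contract at a rate governed by kappa, while the reduced iterates converge in a single step, mirroring the curvature improvement the abstract attributes to well-designed reduction mappings.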
Evan Markou, Thalaiyasingam Ajanthan, Stephen Gould
Computing Technology, Computer Technology
Evan Markou, Thalaiyasingam Ajanthan, Stephen Gould. Sharper Convergence Rates for Nonconvex Optimisation via Reduction Mappings [EB/OL]. (2025-06-10) [2025-06-29]. https://arxiv.org/abs/2506.08428.