Improved convergence rates for the Difference-of-Convex algorithm
We consider a difference-of-convex formulation where one of the terms is allowed to be hypoconvex (or weakly convex). We first examine the precise behavior of a single iteration of the Difference-of-Convex algorithm (DCA), giving a tight characterization of the objective function decrease. This requires distinguishing between eight distinct parameter regimes. Our proofs are inspired by the performance estimation framework, but are much simplified compared to similar previous work. We then derive sublinear DCA convergence rates towards critical points, distinguishing between cases where at least one of the functions is smooth and where both functions are nonsmooth. We conjecture the tightness of these rates for four parameter regimes, based on strong numerical evidence obtained via performance estimation, as well as the leading constant in the asymptotic sublinear rate for two more regimes.
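As background for the abstract above, a single DCA iteration on f = g − h linearizes the concave part −h at the current iterate and minimizes the resulting convex surrogate. The sketch below illustrates this on a toy problem of our own choosing (g(x) = x⁴/4, h(x) = x²/2, both convex and smooth); it is not taken from the paper, whose analysis covers general (hypo)convex and nonsmooth terms.

```python
import numpy as np

# Toy difference-of-convex problem (illustrative, not from the paper):
#   g(x) = x**4 / 4,  h(x) = x**2 / 2,  so  f(x) = g(x) - h(x) = x**4/4 - x**2/2.
# f has critical points at x = 0 and x = +/-1 (its global minimizers).
#
# One DCA step linearizes h at x_k and minimizes the convex surrogate
#   x |-> g(x) - h'(x_k) * x,
# i.e. solves g'(x) = h'(x_k):  x**3 = x_k,  hence  x_{k+1} = cbrt(x_k).

def dca_step(x):
    grad_h = x                  # h'(x_k) = x_k
    return np.cbrt(grad_h)      # argmin_x g(x) - grad_h * x

x = 0.5                         # starting point
for _ in range(50):
    x = dca_step(x)
# The iterates increase monotonically toward the critical point x* = 1,
# where g'(x*) = h'(x*), i.e. f'(x*) = 0.
```

Each step decreases f, which is the quantity whose per-iteration decrease the paper characterizes tightly across its parameter regimes.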
Panagiotis Patrinos, François Glineur, Teodor Rotaru
Subject areas: mathematical computing techniques; computer science
Panagiotis Patrinos, François Glineur, Teodor Rotaru. Improved convergence rates for the Difference-of-Convex algorithm [EB/OL]. (2024-03-25) [2025-08-16]. https://arxiv.org/abs/2403.16864.