Doubly Smoothed Optimistic Gradients: A Universal Approach for Smooth Minimax Problems
Smooth minimax optimization problems play a central role in a wide range of applications, including machine learning, game theory, and operations research. However, existing algorithmic frameworks vary significantly depending on the problem structure -- whether it is convex-concave, nonconvex-concave, convex-nonconcave, or even nonconvex-nonconcave with additional regularity conditions. In particular, this diversity complicates the tuning of step-sizes, since even verifying convexity (or concavity) assumptions is challenging and problem-dependent. We introduce a universal, single-loop algorithm, Doubly Smoothed Optimistic Gradient Descent Ascent (DS-OGDA), that applies to a broad class of smooth minimax problems. Specifically, this class includes convex-concave, nonconvex-concave, convex-nonconcave, and nonconvex-nonconcave minimax optimization problems satisfying a one-sided Kurdyka-Łojasiewicz (KŁ) property. DS-OGDA works with a single, universal set of parameters for all problems in this class, eliminating the need for prior structural knowledge to determine step-sizes. Moreover, when a particular problem structure in our class is specified, DS-OGDA achieves optimal or best-known performance guarantees. Overall, our results provide a comprehensive and versatile framework for smooth minimax optimization, bridging the gap between convex and nonconvex problem structures and simplifying the choice of algorithmic strategies across diverse applications.
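The abstract describes DS-OGDA only at a high level and does not spell out its update rule, so the sketch below is not the paper's algorithm. It is a minimal illustration of the two ingredients the name suggests: an optimistic (extrapolated) gradient step for each player, combined with exponential-moving-average smoothing anchors on both the min and max variables, run on the toy bilinear saddle problem f(x, y) = x * y. All parameter values (eta, beta, mu, r1, r2) are hypothetical choices for this example, not the paper's prescribed universal parameters.

```python
# Illustrative sketch only (NOT the paper's exact DS-OGDA update):
# optimistic gradients plus EMA smoothing anchors on both variables,
# applied to the toy bilinear saddle problem f(x, y) = x * y,
# whose unique saddle point is (0, 0).

def grad(x, y):
    # f(x, y) = x * y  =>  df/dx = y, df/dy = x
    return y, x

eta = 0.1          # step size (hypothetical choice)
beta = 0.9         # EMA weight for the x-anchor (hypothetical)
mu = 0.9           # EMA weight for the y-anchor (hypothetical)
r1, r2 = 0.1, 0.1  # anchoring strengths (hypothetical)

x, y = 1.0, 1.0
gx_prev, gy_prev = grad(x, y)   # previous gradients for the optimistic term
zx, zy = x, y                   # smoothing anchors

for t in range(2000):
    gx, gy = grad(x, y)
    # Optimistic (extrapolated) gradients: 2 * g_t - g_{t-1}
    ogx = 2.0 * gx - gx_prev
    ogy = 2.0 * gy - gy_prev
    # Descent in x, ascent in y, each pulled toward its smoothing anchor
    x = x - eta * (ogx + r1 * (x - zx))
    y = y + eta * (ogy - r2 * (y - zy))
    # Update the anchors by exponential moving average
    zx = beta * zx + (1.0 - beta) * x
    zy = mu * zy + (1.0 - mu) * y
    gx_prev, gy_prev = gx, gy

print(f"final iterate: x = {x:.6f}, y = {y:.6f}")  # approaches the saddle (0, 0)
```

On this bilinear problem, plain gradient descent ascent cycles around the saddle point, whereas the optimistic correction (and the damping from the anchors) drives the iterates toward (0, 0), which is what makes this a convenient sanity check for optimistic-style methods.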
Taoli Zheng, Anthony Man-Cho So, Jiajin Li
Computing Technology, Computer Technology
Taoli Zheng, Anthony Man-Cho So, Jiajin Li. Doubly Smoothed Optimistic Gradients: A Universal Approach for Smooth Minimax Problems [EB/OL]. (2025-06-08) [2025-07-03]. https://arxiv.org/abs/2506.07397.