A stochastic gradient method for trilevel optimization
Building on the success that the field of bilevel optimization has seen in recent years, similar methodologies have begun to be applied to the more difficult problems that arise in trilevel optimization. At the forefront of these applications are new machine learning formulations proposed in the trilevel context, which in turn require efficient and theoretically sound stochastic methods. In this work, we propose the first stochastic gradient descent method for solving unconstrained trilevel optimization problems, together with a convergence theory that covers all forms of inexactness of the trilevel adjoint gradient: inexact solutions of the middle-level and lower-level problems, inexact computation of the trilevel adjoint formula, and noisy estimates of the gradients, Hessians, Jacobians, and third-order derivative tensors involved. We also demonstrate the promise of our approach with numerical results on both synthetic trilevel problems and trilevel formulations for adversarial hyperparameter tuning.
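To make the nested structure concrete, below is a minimal Python sketch of stochastic gradient descent on a toy unconstrained trilevel problem. It is not the paper's algorithm: the quadratic objectives F, G, H, the number of inner gradient steps, the central finite-difference surrogate standing in for the trilevel adjoint gradient, and the additive noise model are all illustrative assumptions.

# A minimal sketch (NOT the paper's method) of stochastic gradient descent on a
# toy unconstrained trilevel problem:
#
#   min_x  F(x, y*(x), z*(x, y*(x)))
#   s.t.   y*(x)    = argmin_y G(x, y, z*(x, y))
#          z*(x, y) = argmin_z H(x, y, z)
#
# The middle- and lower-level problems are solved inexactly with a few gradient
# steps, and the upper-level gradient is approximated by central finite
# differences through the nested solves plus additive noise, standing in for a
# noisy estimate of the trilevel adjoint gradient. All functions and constants
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def H(x, y, z):  # lower level: strongly convex in z
    return 0.5 * (z - x - y) ** 2

def G(x, y, z):  # middle level: strongly convex in y given z*(x, y)
    return 0.5 * (y - 2.0 * x) ** 2 + 0.1 * z ** 2

def F(x, y, z):  # upper level
    return 0.5 * (x - 1.0) ** 2 + 0.5 * (y - z) ** 2

def solve_lower(x, y, steps=20, lr=0.5):
    """Inexact lower-level solve: gradient steps on z, approximating z*(x, y)."""
    z = 0.0
    for _ in range(steps):
        z -= lr * (z - x - y)          # dH/dz
    return z

def solve_middle(x, steps=20, lr=0.2, fd=1e-4):
    """Inexact middle-level solve: gradient steps on the composed map
    y -> G(x, y, z*(x, y)), differentiated by central finite differences."""
    y = 0.0
    for _ in range(steps):
        gp = G(x, y + fd, solve_lower(x, y + fd))
        gm = G(x, y - fd, solve_lower(x, y - fd))
        y -= lr * (gp - gm) / (2 * fd)
    return y

def noisy_trilevel_grad(x, fd=1e-4, noise=0.05):
    """Noisy finite-difference surrogate for the trilevel adjoint gradient."""
    def phi(xv):
        y = solve_middle(xv)
        z = solve_lower(xv, y)
        return F(xv, y, z)
    g = (phi(x + fd) - phi(x - fd)) / (2 * fd)
    return g + noise * rng.standard_normal()

# Stochastic gradient descent on the upper-level variable with decaying steps.
x = 5.0
for k in range(200):
    x -= (0.5 / (1 + 0.05 * k)) * noisy_trilevel_grad(x)
print(f"approximate upper-level solution: x = {x:.3f}")

On this toy instance the inner gradient loops contract quickly (z*(x, y) = x + y and y*(x) = 1.5x), so despite the injected gradient noise the iterates settle near the true upper-level minimizer x = 0.5.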
Tommaso Giovannelli, Griffin Dean Kent, Luis Nunes Vicente
Subject: Computing Technology, Computer Technology
Tommaso Giovannelli, Griffin Dean Kent, Luis Nunes Vicente. A stochastic gradient method for trilevel optimization [EB/OL]. (2025-05-10) [2025-06-24]. https://arxiv.org/abs/2505.06805.