Cooper: A Library for Constrained Optimization in Deep Learning
Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch such as automatic differentiation, specialized deep learning architectures, and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated based on mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at https://github.com/cooper-org/cooper.
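As a minimal illustrative sketch (not Cooper's actual API), the Lagrangian-based first-order update schemes mentioned in the abstract can be understood as alternating gradient descent on the primal variables and projected gradient ascent on the multipliers. The toy problem below, minimizing f(x) = x² subject to g(x) = 1 − x ≤ 0, is a hypothetical example chosen for clarity:

```python
# Sketch of a Lagrangian-based first-order scheme (illustrative only,
# not Cooper's API): min f(x) s.t. g(x) <= 0, with f(x) = x**2 and
# g(x) = 1 - x, whose constrained optimum is x* = 1 with multiplier
# lam* = 2. The Lagrangian is L(x, lam) = f(x) + lam * g(x), lam >= 0.

def grad_x(x, lam):
    # dL/dx = 2x - lam
    return 2.0 * x - lam

def grad_lam(x):
    # dL/dlam = g(x) = 1 - x
    return 1.0 - x

eta = 0.05          # step size for both primal and dual updates
x, lam = 0.0, 0.0   # primal variable and Lagrange multiplier
for _ in range(2000):
    x -= eta * grad_x(x, lam)   # gradient descent on the primal
    lam += eta * grad_lam(x)    # gradient ascent on the dual
    lam = max(lam, 0.0)         # project multiplier onto lam >= 0
```

In a deep learning setting, the analogous updates would be driven by mini-batch gradient estimates obtained via automatic differentiation, which is the regime the library targets.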
Jose Gallego-Posada, Juan Ramirez, Meraj Hashemizadeh, Simon Lacoste-Julien
Subject areas: Computing Technology, Computer Technology
Jose Gallego-Posada, Juan Ramirez, Meraj Hashemizadeh, Simon Lacoste-Julien. Cooper: A Library for Constrained Optimization in Deep Learning [EB/OL]. (2025-04-01) [2025-06-04]. https://arxiv.org/abs/2504.01212.