Inverse Optimization via Learning Feasible Regions
We study inverse optimization (IO), where the goal is to use a parametric optimization program as the hypothesis class to infer relationships between input-decision pairs. Most of the literature focuses on learning only the objective function, as learning the constraint function (i.e., feasible regions) leads to nonconvex training programs. Motivated by this, we focus on learning feasible regions for known linear objectives and introduce two training losses along with a hypothesis class to parameterize the constraint function. Our hypothesis class surpasses the previous objective-only method by naturally capturing discontinuous behaviors in input-decision pairs. We introduce a customized block coordinate descent algorithm with a smoothing technique to solve the training problems, while for further restricted hypothesis classes, we reformulate the training optimization as a tractable convex program or mixed integer linear program. Synthetic experiments and two power system applications, including comparisons with state-of-the-art approaches, showcase and validate the proposed approach.
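The abstract mentions a customized block coordinate descent (BCD) algorithm with a smoothing technique for the nonconvex training problem. The paper's actual losses and parameterization are not given here; the following is a minimal illustrative sketch of the general idea, alternating gradient steps over two parameter blocks of a toy objective with a Huber-smoothed absolute-value term. All function names and the objective itself are hypothetical, not the authors' method.

```python
import numpy as np

def smooth_abs(z, mu):
    # Huber-style smoothing of |z|: quadratic near 0, linear beyond mu.
    return np.where(np.abs(z) <= mu, z**2 / (2 * mu) + mu / 2, np.abs(z))

def bcd(x0, y0, mu=0.1, lr=0.1, iters=200):
    # Block coordinate descent on a toy smoothed objective (hypothetical):
    #   f(x, y) = (x - 1)^2 + (y + 2)^2 + smooth_abs(x - y, mu)
    # Each block is updated by a gradient step with the other block fixed.
    x, y = x0, y0
    for _ in range(iters):
        # Derivative of smooth_abs(z, mu) with respect to z.
        g = lambda z: np.clip(z / mu, -1.0, 1.0)
        x -= lr * (2 * (x - 1) + g(x - y))  # update block x, y held fixed
        y -= lr * (2 * (y + 2) - g(x - y))  # update block y, x held fixed
    loss = (x - 1)**2 + (y + 2)**2 + smooth_abs(x - y, mu)
    return x, y, loss
```

With the nonsmooth term active at the optimum, this toy problem converges to roughly (x, y) = (0.5, -1.5); the smoothing parameter `mu` trades off fidelity to the original nonsmooth loss against gradient stability.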
Ke Ren, Peyman Mohajerin Esfahani, Angelos Georghiou
Computing technology, computer technology
Ke Ren, Peyman Mohajerin Esfahani, Angelos Georghiou. Inverse Optimization via Learning Feasible Regions [EB/OL]. (2025-05-20) [2025-06-17]. https://arxiv.org/abs/2505.15025.