
Value Approximation for Two-Player General-Sum Differential Games with State Constraints


Source: arXiv

Abstract

Solving Hamilton-Jacobi-Isaacs (HJI) PDEs numerically enables equilibrial feedback control in two-player differential games, yet faces the curse of dimensionality (CoD). While physics-informed neural networks (PINNs) have shown promise in alleviating CoD in solving PDEs, vanilla PINNs fall short in learning discontinuous solutions due to their sampling nature, leading to poor safety performance of the resulting policies when values are discontinuous due to state or temporal logic constraints. In this study, we explore three potential solutions to this challenge: (1) a hybrid learning method that is guided by both supervisory equilibria and the HJI PDE, (2) a value-hardening method where a sequence of HJIs is solved with increasing Lipschitz constant on the constraint violation penalty, and (3) the epigraphical technique that lifts the value to a higher-dimensional state space where it becomes continuous. Evaluations through 5D and 9D vehicle and 13D drone simulations reveal that the hybrid method outperforms the others in terms of generalization and safety performance by taking advantage of both the supervisory equilibrium values and costates, and the low cost of PINN loss gradients.
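The hybrid method described in the abstract combines supervised regression on precomputed equilibrium values and costates with a PINN-style HJI residual at collocation points. The following is a minimal, hypothetical PyTorch sketch of such a combined loss, not the authors' implementation; the network architecture, the `hamiltonian` callable, the tensor shapes, and the weight `w_pde` are all assumptions for illustration.

```python
# Hypothetical sketch of a hybrid PINN loss: supervised equilibrium values/costates
# plus an HJI PDE residual term. Shapes, Hamiltonian, and weights are assumptions.
import torch
import torch.nn as nn


class ValueNet(nn.Module):
    """Maps (t, x) to the two players' values (V1, V2)."""

    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))


def hybrid_loss(model, t_sup, x_sup, v_sup, costate_sup,
                t_col, x_col, hamiltonian, w_pde=1.0):
    # Supervised term: match precomputed equilibrium values and costates (dV_i/dx).
    x_sup = x_sup.requires_grad_(True)
    v_pred = model(t_sup, x_sup)
    grads = [torch.autograd.grad(v_pred[:, i].sum(), x_sup, create_graph=True)[0]
             for i in range(2)]
    costate_pred = torch.stack(grads, dim=1)  # (batch, 2, state_dim)
    sup_loss = (nn.functional.mse_loss(v_pred, v_sup)
                + nn.functional.mse_loss(costate_pred, costate_sup))

    # PDE residual term at collocation points: dV_i/dt + H_i(x, dV_i/dx) = 0.
    t_col = t_col.requires_grad_(True)
    x_col = x_col.requires_grad_(True)
    v_col = model(t_col, x_col)
    pde_loss = 0.0
    for i in range(2):
        v_t = torch.autograd.grad(v_col[:, i].sum(), t_col, create_graph=True)[0]
        v_x = torch.autograd.grad(v_col[:, i].sum(), x_col, create_graph=True)[0]
        residual = v_t.squeeze(-1) + hamiltonian(i, x_col, v_x)
        pde_loss = pde_loss + residual.pow(2).mean()

    return sup_loss + w_pde * pde_loss
```

The point of the combination is that supervised data anchors the network near discontinuities where pure PDE sampling struggles, while the cheap PDE residual gradients help the value generalize between supervised samples.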

Lei Zhang, Mukesh Ghimire, Wenlong Zhang, Zhe Xu, Yi Ren

Subjects: Mathematics; Fundamental Theory of Automation; Computing Technology and Computer Technology

Lei Zhang, Mukesh Ghimire, Wenlong Zhang, Zhe Xu, Yi Ren. Value Approximation for Two-Player General-Sum Differential Games with State Constraints [EB/OL]. (2023-11-27) [2025-08-02]. https://arxiv.org/abs/2311.16520.
