
Learning Quasi-LPV Models and Robust Control Invariant Sets with Reduced Conservativeness


Source: arXiv
Abstract

We present an approach to identify a quasi-Linear Parameter-Varying (qLPV) model of a plant, with the qLPV model guaranteed to admit a robust control invariant (RCI) set. It builds upon the concurrent synthesis framework presented in [1], in which the requirement that an RCI set exist is encoded as a control-oriented regularization. Here, we reduce the conservativeness of the approach by bounding the qLPV system with an uncertain LTI system, which we derive using bound propagation techniques. The resulting regularization function is the optimal value of a nonlinear robust optimization problem that we solve via a differentiable algorithm. We numerically demonstrate the benefits of the proposed approach over two benchmark approaches.
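As a rough illustration of the bound-propagation idea mentioned in the abstract (and not the paper's actual algorithm), the sketch below uses interval arithmetic to enclose a qLPV state matrix A(theta) = A0 + sum_i theta_i * A_i, with the scheduling vector theta confined to a box, inside an interval matrix [A_lo, A_hi], i.e., an uncertain LTI envelope of the qLPV dynamics. All names, matrices, and bounds are illustrative assumptions.

import numpy as np

def lti_envelope(A0, A_list, theta_lo, theta_hi):
    """Propagate box bounds on theta through A(theta) = A0 + sum_i theta_i * A_i.

    Returns elementwise lower/upper bounds (A_lo, A_hi) valid for every
    admissible theta; this is a simple interval-arithmetic over-approximation.
    """
    A_lo = A0.astype(float).copy()
    A_hi = A0.astype(float).copy()
    for Ai, lo, hi in zip(A_list, theta_lo, theta_hi):
        # Each entry of theta_i * Ai is monotone in theta_i, so its extreme
        # values over [lo, hi] are attained at the interval endpoints.
        cand = np.stack([lo * Ai, hi * Ai])
        A_lo += cand.min(axis=0)
        A_hi += cand.max(axis=0)
    return A_lo, A_hi

if __name__ == "__main__":
    # Toy 2-state qLPV model with one scheduling parameter in [0, 0.5].
    A0 = np.array([[0.9, 0.1], [0.0, 0.8]])
    A1 = np.array([[0.0, 0.2], [0.1, 0.0]])
    A_lo, A_hi = lti_envelope(A0, [A1], theta_lo=[0.0], theta_hi=[0.5])
    print("A_lo =\n", A_lo)
    print("A_hi =\n", A_hi)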

Sampath Kumar Mulagaleti, Alberto Bemporad

DOI: 10.1109/LCSYS.2025.3569637

Subject: Fundamental Theory of Automation

Sampath Kumar Mulagaleti, Alberto Bemporad. Learning Quasi-LPV Models and Robust Control Invariant Sets with Reduced Conservativeness [EB/OL]. (2025-05-12) [2025-06-04]. https://arxiv.org/abs/2505.07287.
