
Robust Invariant Representation Learning by Distribution Extrapolation

Source: arXiv

Abstract

Invariant risk minimization (IRM) aims to enable out-of-distribution (OOD) generalization in deep learning by learning invariant representations. As IRM poses an inherently challenging bi-level optimization problem, most existing approaches -- including IRMv1 -- adopt penalty-based single-level approximations. However, empirical studies consistently show that these methods often fail to outperform well-tuned empirical risk minimization (ERM), highlighting the need for more robust IRM implementations. This work theoretically identifies a key limitation common to many IRM variants: their penalty terms are highly sensitive to limited environment diversity and over-parameterization, resulting in performance degradation. To address this issue, a novel extrapolation-based framework is proposed that enhances environmental diversity by augmenting the IRM penalty through synthetic distributional shifts. Extensive experiments -- ranging from synthetic setups to realistic, over-parameterized scenarios -- demonstrate that the proposed method consistently outperforms state-of-the-art IRM variants, validating its effectiveness and robustness.
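The abstract refers to penalty-based single-level approximations of IRM such as IRMv1, which add the squared gradient of each environment's risk with respect to a dummy scalar classifier (evaluated at 1) to the pooled empirical risk. The paper's extrapolation-based augmentation is not detailed in this abstract, so the sketch below shows only the standard IRMv1-style objective it builds on, using a squared-error risk; all function names are illustrative.

```python
import numpy as np

def irmv1_penalty(logits, targets):
    """IRMv1-style penalty for one environment: squared gradient of the
    risk R_e(w) = mean((w * logits - targets)^2) w.r.t. the dummy scalar
    classifier w, evaluated at w = 1."""
    grad = 2.0 * np.mean((logits - targets) * logits)
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Penalized single-level IRM objective.
    envs: list of (logits, targets) array pairs, one per environment.
    lam: penalty weight trading off ERM risk against invariance."""
    risk = np.mean([np.mean((lg - y) ** 2) for lg, y in envs])
    penalty = np.mean([irmv1_penalty(lg, y) for lg, y in envs])
    return risk + lam * penalty
```

With a perfect predictor (`logits == targets`) both the risk and the penalty vanish, while a predictor whose risk gradient differs across environments is penalized; the paper argues this penalty degrades under limited environment diversity and over-parameterization, motivating its synthetic distributional shifts.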

Kotaro Yoshida, Konstantinos Slavakis

Subject: Computing Technology; Computer Technology

Kotaro Yoshida, Konstantinos Slavakis. Robust Invariant Representation Learning by Distribution Extrapolation [EB/OL]. (2025-05-21) [2025-06-06]. https://arxiv.org/abs/2505.16126.