Reliable Disentanglement Multi-view Learning Against View Adversarial Attacks
Trustworthy multi-view learning has attracted extensive attention because evidential learning can provide reliable uncertainty estimation to enhance the credibility of multi-view predictions. Existing trusted multi-view learning methods implicitly assume that multi-view data is secure. However, in safety-sensitive applications such as autonomous driving and security monitoring, multi-view data often faces threats from adversarial perturbations that deceive or disrupt multi-view models. This inevitably leads to the adversarial unreliability problem (AUP) in trusted multi-view learning. To overcome this problem, we propose a novel multi-view learning framework, namely Reliable Disentanglement Multi-view Learning (RDML). Specifically, we first propose evidential disentanglement learning to decompose each view into a clean part and an adversarial part under the guidance of the corresponding evidence, which is extracted by a pretrained evidence extractor. Then, we employ a feature recalibration module to mitigate the negative impact of adversarial perturbations and extract potentially informative features from them. Finally, to further suppress irreparable adversarial interference, a view-level evidential attention mechanism is designed. Extensive experiments on multi-view classification tasks under adversarial attacks show that RDML outperforms state-of-the-art methods by a relatively large margin. Our code is available at https://github.com/Willy1005/2025-IJCAI-RDML.
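The pipeline described in the abstract (per-view disentanglement into clean and adversarial parts, recalibration of the adversarial part, and uncertainty-aware fusion across views) can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical rendering, not the authors' implementation: the module names (RDMLView, fuse_views), the layer dimensions, and the softmax-over-negative-uncertainty attention rule are all assumptions, and the guidance loss from the pretrained evidence extractor is omitted. See the linked repository for the actual code.

```python
# Minimal sketch of an RDML-style pipeline (assumed details, not the
# authors' implementation). Each view is split into a clean and an
# adversarial representation; a recalibration gate salvages informative
# signal from the adversarial part; per-view Dirichlet evidence is then
# fused with an uncertainty-aware view attention so that irreparably
# perturbed views are effectively ignored.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RDMLView(nn.Module):
    """One view's branch: disentangle, recalibrate, emit evidence.

    In the paper the disentanglement is guided by a pretrained evidence
    extractor; that guidance loss is not shown here."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.clean_enc = nn.Linear(in_dim, hid_dim)   # clean part
        self.adv_enc = nn.Linear(in_dim, hid_dim)     # adversarial part
        self.recalib = nn.Sequential(                 # feature recalibration
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim), nn.Sigmoid())
        self.head = nn.Linear(hid_dim, num_classes)

    def forward(self, x):
        z_clean, z_adv = self.clean_enc(x), self.adv_enc(x)
        # The recalibration gate extracts potentially informative features
        # from the adversarial part instead of discarding it outright.
        z = z_clean + self.recalib(z_adv) * z_adv
        # Softplus keeps per-view evidence non-negative, as is standard
        # in evidential deep learning.
        return F.softplus(self.head(z))

def fuse_views(evidences):
    """View-level evidential attention + Dirichlet fusion (assumed rule).

    Views whose evidence implies high uncertainty receive low attention
    weight, so heavily attacked views contribute little to the fusion."""
    K = evidences[0].shape[1]
    alphas = [e + 1.0 for e in evidences]             # Dirichlet parameters
    u = torch.stack([K / a.sum(1) for a in alphas])   # per-view uncertainty
    attn = F.softmax(-u, dim=0)                       # low uncertainty -> high weight
    fused = sum(w.unsqueeze(1) * e for w, e in zip(attn, evidences))
    return fused + 1.0                                # fused Dirichlet alpha

# Toy usage with two random "views" of a batch of 4 samples.
views = [torch.randn(4, 32), torch.randn(4, 32)]
branches = [RDMLView(32, 64, 10) for _ in views]
alpha = fuse_views([b(x) for b, x in zip(branches, views)])
prob = alpha / alpha.sum(1, keepdim=True)             # expected class probabilities
print(prob.shape)  # torch.Size([4, 10])
```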
Xuyang Wang, Siyuan Duan, Qizhi Li, Guiduo Duan, Yuan Sun, Dezhong Peng
Computing technology; computer technology
Xuyang Wang, Siyuan Duan, Qizhi Li, Guiduo Duan, Yuan Sun, Dezhong Peng. Reliable Disentanglement Multi-view Learning Against View Adversarial Attacks [EB/OL]. (2025-05-06) [2025-07-21]. https://arxiv.org/abs/2505.04046.