SimMLM: A Simple Framework for Multi-modal Learning with Missing Modality

Source: arXiv
Abstract

In this paper, we propose SimMLM, a simple yet powerful framework for multimodal learning with missing modalities. Unlike existing approaches that rely on sophisticated network architectures or complex data imputation techniques, SimMLM provides a generic and effective solution that can adapt to various missing modality scenarios with improved accuracy and robustness. Specifically, SimMLM consists of a generic Dynamic Mixture of Modality Experts (DMoME) architecture, featuring a dynamic, learnable gating mechanism that automatically adjusts each modality's contribution in both full and partial modality settings. A key innovation of SimMLM is the proposed More vs. Fewer (MoFe) ranking loss, which ensures that task accuracy improves or remains stable as more modalities are made available. This aligns the model with an intuitive principle: removing one or more modalities should not increase accuracy. We validate SimMLM on multimodal medical image segmentation (BraTS 2018) and multimodal classification (UPMC Food-101, avMNIST) tasks, where it consistently surpasses competitive methods, demonstrating superior accuracy, interpretability, robustness, and reliability across both complete and missing modality scenarios at test time.
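
Below is a minimal, hypothetical PyTorch sketch of the two ideas summarised in the abstract: a dynamic gate over modality experts that masks out and renormalises missing modalities, and a hinge-style "More vs. Fewer" ranking penalty. All names (DMoMEGate, mofe_ranking_loss, margin) and the exact gate/loss forms are illustrative assumptions for this abstract, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DMoMEGate(nn.Module):
    # Hypothetical sketch of a dynamic, learnable gate over modality experts.
    # Each modality is assumed to be encoded by its own expert elsewhere; the
    # gate produces per-modality weights, masks out missing modalities, and
    # renormalises so the fused feature uses only the observed modalities.
    def __init__(self, num_modalities: int, feat_dim: int):
        super().__init__()
        self.gate = nn.Linear(num_modalities * feat_dim, num_modalities)

    def forward(self, expert_feats: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # expert_feats: (B, M, D) per-modality expert features (zeros where missing)
        # present:      (B, M)    binary mask, 1 where the modality is observed
        logits = self.gate(expert_feats.flatten(1))                # (B, M)
        logits = logits.masked_fill(present == 0, float("-inf"))   # drop missing modalities
        weights = F.softmax(logits, dim=-1)                        # missing modality -> weight 0
        return (weights.unsqueeze(-1) * expert_feats).sum(dim=1)   # fused feature (B, D)


def mofe_ranking_loss(loss_more: torch.Tensor,
                      loss_fewer: torch.Tensor,
                      margin: float = 0.0) -> torch.Tensor:
    # Hinge-style penalty encoding "more modalities should not be worse":
    # the per-sample task loss computed with MORE modalities available should
    # not exceed the loss computed with FEWER modalities (plus an optional margin).
    return F.relu(loss_more - loss_fewer + margin).mean()

In training, loss_more and loss_fewer would plausibly come from forward passes of the same network on the full modality set and on a randomly masked subset, with this penalty added to the usual task loss; this is only one reading of the MoFe constraint described above.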

Sijie Li, Chen Chen, Jungong Han

Computing Technology, Computer Technology

Sijie Li, Chen Chen, Jungong Han. SimMLM: A Simple Framework for Multi-modal Learning with Missing Modality [EB/OL]. (2025-08-06) [2025-08-18]. https://arxiv.org/abs/2507.19264.
