Exploring Modality Disruption in Multimodal Fake News Detection
The rapid growth of social media has led to the widespread dissemination of fake news across multiple content forms, including text, images, audio, and video. Compared to unimodal fake news detection, multimodal fake news detection benefits from the increased availability of information across multiple modalities. However, in the social media setting, certain modalities may carry disruptive or over-expressive information, such as exaggerated or embellished content. We define this phenomenon as modality disruption and explore its impact on detection models through experiments. To address modality disruption in a targeted manner, we propose a multimodal fake news detection framework, FND-MoE. Additionally, we design a two-pass feature selection mechanism to further mitigate the impact of modality disruption. Extensive experiments on the FakeSV and FVC-2018 datasets demonstrate that FND-MoE significantly outperforms state-of-the-art methods, with accuracy improvements of 3.45% and 3.71% on the respective datasets compared to baseline models.
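The abstract does not give implementation details, so the following is only a minimal sketch of the general idea it describes: a mixture-of-experts-style gate that weights per-modality features and a second thresholding pass that suppresses low-weight (potentially disruptive) modalities. All names, dimensions, and the `drop_threshold` parameter are illustrative assumptions, not the authors' FND-MoE architecture.

```python
# Hypothetical sketch: gated fusion of modality features with a second-pass
# threshold that zeroes out weakly weighted (possibly disruptive) modalities.
import torch
import torch.nn as nn

class ModalityGatedFusion(nn.Module):
    def __init__(self, dim: int = 256, num_modalities: int = 3):
        super().__init__()
        # One "expert" projection per modality (e.g., text / visual / audio).
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_modalities)])
        # Gating network: one weight per modality from the concatenated features.
        self.gate = nn.Sequential(
            nn.Linear(dim * num_modalities, num_modalities),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(dim, 2)  # real vs. fake logits

    def forward(self, feats: list[torch.Tensor], drop_threshold: float = 0.1):
        # feats: list of (batch, dim) per-modality feature tensors.
        weights = self.gate(torch.cat(feats, dim=-1))                  # (batch, M)
        # Second pass (illustrative): drop modalities whose gate weight falls
        # below a threshold, then renormalize the remaining weights.
        mask = (weights >= drop_threshold).float()
        weights = weights * mask
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        expert_out = torch.stack(
            [expert(f) for expert, f in zip(self.experts, feats)], dim=1
        )                                                              # (batch, M, dim)
        fused = (weights.unsqueeze(-1) * expert_out).sum(dim=1)        # (batch, dim)
        return self.classifier(fused)

# Usage: three modality feature tensors for a batch of 4 items.
model = ModalityGatedFusion()
logits = model([torch.randn(4, 256) for _ in range(3)])
print(logits.shape)  # torch.Size([4, 2])
```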
Moyang Liu, Kaiying Yan, Yukun Liu, Ruibo Fu, Zhengqi Wen, Xuefei Liu, Chenxing Li
Computing Technology; Computer Technology
Moyang Liu, Kaiying Yan, Yukun Liu, Ruibo Fu, Zhengqi Wen, Xuefei Liu, Chenxing Li. Exploring Modality Disruption in Multimodal Fake News Detection [EB/OL]. (2025-04-12) [2025-06-03]. https://arxiv.org/abs/2504.09154