National Preprint Platform

DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts

Source: arXiv

Abstract

The proliferation of disinformation demands reliable and scalable fact-checking solutions. We present Dynamic Evidence-based FAct-checking with Multimodal Experts (DEFAME), a modular, zero-shot MLLM pipeline for open-domain, text-image claim verification. DEFAME operates in a six-stage process, dynamically selecting the tools and search depth to extract and evaluate textual and visual evidence. Unlike prior approaches that are text-only, lack explainability, or rely solely on parametric knowledge, DEFAME performs end-to-end verification, accounting for images in claims and evidence while generating structured, multimodal reports. Evaluation on the popular benchmarks VERITE, AVeriTeC, and MOCHEG shows that DEFAME surpasses all previous methods, establishing itself as the new state of the art for both unimodal and multimodal fact-checking. Moreover, we introduce a new multimodal benchmark, ClaimReview2024+, featuring claims after the knowledge cutoff of GPT-4o, avoiding data leakage. Here, DEFAME drastically outperforms the GPT-4o baselines, showing temporal generalizability and the potential for real-time fact-checking.
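The staged, tool-selecting verification loop described in the abstract can be sketched as follows. This is a minimal illustration of the general pattern (plan which tools to use, gather evidence, judge, and iterate until a verdict is reached); all stage names, interfaces, and heuristics here are hypothetical placeholders, not DEFAME's actual API or implementation.

```python
# Illustrative sketch of a multi-stage, tool-selecting claim-verification
# loop. Every name and heuristic below is a placeholder assumption.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    image_refs: list = field(default_factory=list)  # paths/URLs of claim images

def plan(claim):
    # Choose tools based on the claim's modality (hypothetical heuristic):
    # multimodal claims additionally trigger image-based tools.
    tools = ["web_search"]
    if claim.image_refs:
        tools += ["reverse_image_search", "image_analysis"]
    return tools

def execute(claim, tools):
    # Placeholder retrieval: each tool yields one evidence item as a string.
    # A real system would call search APIs and vision models here.
    return [f"{tool}:evidence_for:{claim.text}" for tool in tools]

def judge(claim, evidence):
    # Placeholder verdict: a real system would prompt an MLLM with the
    # claim and accumulated evidence to produce a structured report.
    return "supported" if evidence else "not_enough_information"

def verify(claim, max_rounds=3):
    # Iterate plan -> execute -> judge, deepening the search each round
    # until a verdict other than "not_enough_information" is reached.
    evidence = []
    for _ in range(max_rounds):
        evidence += execute(claim, plan(claim))
        verdict = judge(claim, evidence)
        if verdict != "not_enough_information":
            return verdict, evidence
    return "not_enough_information", evidence
```

The key design point mirrored here is that tool selection and search depth are decided dynamically per claim rather than fixed in advance, which is what lets a single pipeline handle both text-only and text-image claims.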

Anna Rohrbach, Mark Rothermel, Marcus Rohrbach, Tobias Braun

Subjects: Computing and Computer Technology; Information and Knowledge Dissemination

Anna Rohrbach, Mark Rothermel, Marcus Rohrbach, Tobias Braun. DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts [EB/OL]. (2025-07-24) [2025-08-23]. https://arxiv.org/abs/2412.10510.