
Multimedia Verification Through Multi-Agent Deep Research Multimodal Large Language Models


Source: arXiv

Abstract

This paper presents our submission to the ACMMM25 Grand Challenge on Multimedia Verification. We developed a multi-agent verification system that combines Multimodal Large Language Models (MLLMs) with specialized verification tools to detect multimedia misinformation. Our system operates through six stages: raw data processing, planning, information extraction, deep research, evidence collection, and report generation. The core Deep Researcher Agent employs four tools: reverse image search, metadata analysis, fact-checking databases, and verified news processing, which extracts spatial, temporal, attribution, and motivational context. We demonstrate our approach on a challenge dataset sample involving complex multimedia content. Our system successfully verified content authenticity, extracted precise geolocation and timing information, and traced source attribution across multiple platforms, effectively addressing real-world multimedia verification scenarios.
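The six-stage pipeline described in the abstract can be sketched as a chain of stage functions passing a shared state forward, with the Deep Researcher stage dispatching to the four tools. This is a minimal illustrative sketch; all function names, the state layout, and the stubbed tool outputs are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the six-stage verification pipeline; every name
# and return value here is illustrative, not the authors' code.

def raw_data_processing(state):
    # Stage 1: normalize the incoming multimedia item (stubbed).
    state["normalized"] = state["input"].strip()
    return state

def planning(state):
    # Stage 2: decide which verification tools to run.
    state["plan"] = ["reverse_image_search", "metadata_analysis",
                     "fact_check_db", "verified_news"]
    return state

def information_extraction(state):
    # Stage 3: pull checkable claims from the content (stubbed).
    state["claims"] = [state["normalized"]]
    return state

def deep_research(state):
    # Stage 4: the Deep Researcher Agent runs each planned tool.
    tools = {
        "reverse_image_search": lambda c: {"tool": "reverse_image_search", "match": None},
        "metadata_analysis":    lambda c: {"tool": "metadata_analysis", "exif": {}},
        "fact_check_db":        lambda c: {"tool": "fact_check_db", "hits": []},
        # Verified-news processing extracts spatial, temporal,
        # attribution, and motivational context, per the abstract.
        "verified_news":        lambda c: {"tool": "verified_news",
                                           "context": ["spatial", "temporal",
                                                       "attribution", "motivational"]},
    }
    state["findings"] = [tools[name](state["claims"]) for name in state["plan"]]
    return state

def evidence_collection(state):
    # Stage 5: gather tool findings into an evidence set.
    state["evidence"] = [f for f in state["findings"] if f is not None]
    return state

def report_generation(state):
    # Stage 6: emit the final verification report.
    state["report"] = {"verdict": "unverified", "evidence": state["evidence"]}
    return state

def run_pipeline(item):
    state = {"input": item}
    for stage in (raw_data_processing, planning, information_extraction,
                  deep_research, evidence_collection, report_generation):
        state = stage(state)
    return state["report"]
```

The linear stage chain mirrors the paper's stated ordering; in the real system each stage would be an MLLM-backed agent rather than a stub.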

Huy Hoan Le, Van Sy Thinh Nguyen, Thi Le Chi Dang, Vo Thanh Khang Nguyen, Truong Thanh Hung Nguyen, Hung Cao

Subject: computing technology; computer technology

Huy Hoan Le, Van Sy Thinh Nguyen, Thi Le Chi Dang, Vo Thanh Khang Nguyen, Truong Thanh Hung Nguyen, Hung Cao. Multimedia Verification Through Multi-Agent Deep Research Multimodal Large Language Models [EB/OL]. (2025-07-06) [2025-07-17]. https://arxiv.org/abs/2507.04410.
