National Preprint Platform

CAMME: Adaptive Deepfake Image Detection with Multi-Modal Cross-Attention

Source: arXiv
English Abstract

The proliferation of sophisticated AI-generated deepfakes poses critical challenges for digital media authentication and societal security. While existing detection methods perform well within specific generative domains, they exhibit significant performance degradation when applied to manipulations produced by unseen architectures, a fundamental limitation as generative technologies rapidly evolve. We propose CAMME (Cross-Attention Multi-Modal Embeddings), a framework that dynamically integrates visual, textual, and frequency-domain features through a multi-head cross-attention mechanism to establish robust cross-domain generalization. Extensive experiments demonstrate CAMME's superiority over state-of-the-art methods, yielding improvements of 12.56% on natural scenes and 13.25% on facial deepfakes. The framework demonstrates exceptional resilience, maintaining over 91% accuracy under natural image perturbations and achieving 89.01% and 96.14% accuracy against PGD and FGSM adversarial attacks, respectively. Our findings validate that integrating complementary modalities through cross-attention enables more effective decision boundary realignment for reliable deepfake detection across heterogeneous generative architectures.
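The abstract describes fusing visual, textual, and frequency-domain features via multi-head cross-attention. The following is a minimal NumPy sketch of that general mechanism, not the paper's actual implementation: the dimensions, token counts, and random projection weights are illustrative assumptions (a trained model would learn the projections), with visual tokens as queries attending over a context of textual and frequency tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, num_heads=4, seed=0):
    """Multi-head cross-attention: `query` tokens attend over `context` tokens.

    query:   (n_q, d) array, e.g. visual embeddings
    context: (n_c, d) array, e.g. concatenated textual + frequency embeddings
    Projection matrices are random here purely for illustration.
    """
    n_q, d = query.shape
    assert d % num_heads == 0, "model dim must divide evenly across heads"
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # project, then split the feature dim into (heads, head_dim)
    Q = (query @ Wq).reshape(n_q, num_heads, dh).transpose(1, 0, 2)
    K = (context @ Wk).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    V = (context @ Wv).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    # scaled dot-product attention per head: (heads, n_q, n_c)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))
    # merge heads back into a single (n_q, d) output
    return (attn @ V).transpose(1, 0, 2).reshape(n_q, d)

# toy example: 16 visual tokens attend over 16 text+frequency tokens
visual = np.random.default_rng(1).standard_normal((16, 64))
text_freq = np.random.default_rng(2).standard_normal((16, 64))
fused = cross_attention(visual, text_freq)
print(fused.shape)  # (16, 64)
```

In a detector along these lines, the fused output would typically feed a classification head; the key property shown here is that each query token forms a convex combination of context-token values, letting one modality condition on the others.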

Naseem Khan, Tuan Nguyen, Amine Bermak, Issa Khalil

Subjects: Computing Technology; Computer Technology

Naseem Khan, Tuan Nguyen, Amine Bermak, Issa Khalil. CAMME: Adaptive Deepfake Image Detection with Multi-Modal Cross-Attention [EB/OL]. (2025-05-23) [2025-07-20]. https://arxiv.org/abs/2505.18035
