National Preprint Platform

Vulnerability-Aware Spatio-Temporal Learning for Generalizable Deepfake Video Detection

Source: arXiv
Abstract

Detecting deepfake videos is highly challenging given the complexity of characterizing spatio-temporal artifacts. Most existing methods rely on binary classifiers trained on real and fake image sequences, which hinders their generalization to unseen generation methods. Moreover, with the constant progress of generative Artificial Intelligence (AI), deepfake artifacts are becoming imperceptible at both the spatial and the temporal levels, making them extremely difficult to capture. To address these issues, we propose a fine-grained deepfake video detection approach called FakeSTormer that enforces the modeling of subtle spatio-temporal inconsistencies while avoiding overfitting. Specifically, we introduce a multi-task learning framework that incorporates two auxiliary branches for explicitly attending to artifact-prone spatial and temporal regions. Additionally, we propose a video-level data-synthesis strategy that generates pseudo-fake videos with subtle spatio-temporal artifacts, providing high-quality samples and hands-free annotations for our auxiliary branches. Extensive experiments on several challenging benchmarks demonstrate the superiority of our approach over recent state-of-the-art methods. The code is available at https://github.com/10Ring/FakeSTormer.
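To make the multi-task idea concrete, the sketch below combines a main real/fake classification loss with two auxiliary regression terms, one spatial and one temporal, supervised by "vulnerability" targets such as those the abstract's pseudo-fake synthesis would provide. This is a minimal illustrative sketch only: the function name, the use of MSE for the auxiliary branches, and the weights `w_spatial`/`w_temporal` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def multi_task_loss(cls_logit, cls_label,
                    spatial_pred, spatial_target,
                    temporal_pred, temporal_target,
                    w_spatial=0.5, w_temporal=0.5):
    """Hypothetical combined objective (not the paper's exact loss):
    binary cross-entropy for the real/fake head, plus MSE regression
    terms for two auxiliary branches that attend to artifact-prone
    spatial and temporal regions."""
    eps = 1e-7
    # Binary cross-entropy on the main classification head.
    p = 1.0 / (1.0 + np.exp(-cls_logit))
    bce = -(cls_label * np.log(p + eps) + (1 - cls_label) * np.log(1 - p + eps))
    # Auxiliary regression against pseudo-fake vulnerability targets:
    # a per-frame spatial map and a per-video temporal profile (assumed shapes).
    l_spatial = np.mean((spatial_pred - spatial_target) ** 2)
    l_temporal = np.mean((temporal_pred - temporal_target) ** 2)
    return bce + w_spatial * l_spatial + w_temporal * l_temporal

# Example: a perfectly localized sample reduces to plain BCE on the logit.
loss = multi_task_loss(0.0, 1,
                       np.zeros((4, 4)), np.zeros((4, 4)),
                       np.zeros(8), np.zeros(8))
```

Because the auxiliary targets come from the synthesis pipeline rather than manual labels, this style of supervision is what the abstract refers to as hands-free annotation.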

Dat Nguyen, Marcella Astrid, Anis Kacem, Enjie Ghorbel, Djamila Aouada

Computing Technology; Computer Technology

Dat Nguyen, Marcella Astrid, Anis Kacem, Enjie Ghorbel, Djamila Aouada. Vulnerability-Aware Spatio-Temporal Learning for Generalizable Deepfake Video Detection [EB/OL]. (2025-07-19) [2025-08-16]. https://arxiv.org/abs/2501.01184.
