A Brief Review for Compression and Transfer Learning Techniques in DeepFake Detection
Training and deploying deepfake detection models on edge devices offers the advantage of preserving data privacy and confidentiality by processing data close to its source. However, this approach is constrained by the limited computational and memory resources available at the edge. To address this challenge, we explore compression techniques to reduce computational demands and inference time, alongside transfer learning methods to minimize training overhead. Using the Synthbuster, RAISE, and ForenSynths datasets, we evaluate the effectiveness of pruning, knowledge distillation (KD), quantization, fine-tuning, and adapter-based techniques. Our experimental results show that compression and transfer learning can both be applied effectively, even at a high compression level of 90%, with performance maintained when the training and validation data originate from the same DeepFake model. However, when the test data are generated by DeepFake models absent from the training set, a domain generalization issue becomes evident.
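To make the 90% compression level concrete, below is a minimal sketch of global magnitude pruning in PyTorch. The framework, backbone, and layer sizes are illustrative assumptions for this page, not the authors' code or models.

```python
# Minimal sketch (assumption: PyTorch) of 90% global L1 magnitude pruning,
# one of the compression techniques the paper evaluates. The tiny CNN below
# is a hypothetical stand-in for a deepfake-detection backbone.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),  # binary output: real vs. fake
)

# Collect every prunable weight tensor (conv and linear layers).
params = [(m, "weight") for m in model.modules()
          if isinstance(m, (nn.Conv2d, nn.Linear))]

# Zero out the 90% of weights with the smallest L1 magnitude,
# ranked globally across all collected layers.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                          amount=0.9)

# Fold the pruning masks into the weights permanently.
for m, name in params:
    prune.remove(m, name)

# Verify the resulting sparsity over weight tensors (should be ~90%).
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"Global weight sparsity: {zeros / total:.1%}")
```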
Andreas Karathanasis, John Violos, Ioannis Kompatsiaris, Symeon Papadopoulos
Computing Technology, Computer Technology
Andreas Karathanasis, John Violos, Ioannis Kompatsiaris, Symeon Papadopoulos. A Brief Review for Compression and Transfer Learning Techniques in DeepFake Detection [EB/OL]. (2025-04-29) [2025-06-30]. https://arxiv.org/abs/2504.21066.