Detecting and Understanding Hateful Contents in Memes Through Captioning and Visual Question-Answering

Source: arXiv
Abstract

Memes are widely used for humor and cultural commentary, but they are increasingly exploited to spread hateful content. Due to their multimodal nature, hateful memes often evade traditional text-only or image-only detection systems, particularly when they employ subtle or coded references. To address these challenges, we propose a multimodal hate detection framework that integrates key components: optical character recognition (OCR) to extract embedded text, captioning to describe visual content neutrally, sub-label classification for granular categorization of hateful content, retrieval-augmented generation (RAG) for contextually relevant retrieval, and visual question answering (VQA) for iterative analysis of symbolic and contextual cues. This combination enables the framework to uncover latent signals that simpler pipelines fail to detect. Experimental results on the Facebook Hateful Memes dataset show that the proposed framework outperforms unimodal and conventional multimodal models in both accuracy and AUC-ROC.
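
The abstract describes a staged architecture; the sketch below shows one way such a pipeline could be wired together in Python. It is illustrative only: the class HatefulMemePipeline, its component interfaces (ocr.read, captioner.describe, classifier.classify, retriever.search, vqa_model.ask), the question templates, and the final fusion rule are hypothetical placeholders introduced here, not the authors' implementation.

```python
# Minimal sketch of a staged meme-analysis pipeline in the spirit of the abstract:
# OCR -> captioning -> sub-label classification -> retrieval (RAG) -> iterative VQA.
# Every class name, method name, and the final fusion rule is a hypothetical
# placeholder, not the paper's implementation.

from dataclasses import dataclass, field
from typing import List


@dataclass
class MemeAnalysis:
    """Evidence accumulated about a single meme."""
    ocr_text: str = ""
    caption: str = ""
    sub_labels: List[str] = field(default_factory=list)
    retrieved_context: List[str] = field(default_factory=list)
    vqa_findings: List[str] = field(default_factory=list)
    hateful: bool = False


class HatefulMemePipeline:
    """Hypothetical orchestrator wiring the five components together."""

    def __init__(self, ocr, captioner, classifier, retriever, vqa_model):
        # Each component is any object exposing the single method used below;
        # real OCR, captioning, retrieval, and VQA models would be plugged in here.
        self.ocr = ocr
        self.captioner = captioner
        self.classifier = classifier
        self.retriever = retriever
        self.vqa_model = vqa_model

    def analyse(self, image) -> MemeAnalysis:
        result = MemeAnalysis()
        # 1. Extract the text overlaid on the meme.
        result.ocr_text = self.ocr.read(image)
        # 2. Describe the visual content in neutral terms.
        result.caption = self.captioner.describe(image)
        # 3. Assign fine-grained sub-labels (e.g. targeted group, attack type).
        result.sub_labels = self.classifier.classify(result.ocr_text, result.caption)
        # 4. Retrieve background knowledge to resolve coded or symbolic references.
        result.retrieved_context = self.retriever.search(
            f"{result.ocr_text} {result.caption}"
        )
        # 5. Iteratively probe the image with targeted questions.
        for question in self._questions(result):
            answer = self.vqa_model.ask(image, question)
            result.vqa_findings.append(f"{question} -> {answer}")
        # 6. Fuse the signals into a binary decision (trivial placeholder rule).
        result.hateful = bool(result.sub_labels)
        return result

    def _questions(self, result: MemeAnalysis) -> List[str]:
        # A real system would generate these dynamically from the retrieved context.
        return [
            "Does the image contain symbols associated with hate movements?",
            f"Does the text '{result.ocr_text}' target a protected group?",
        ]


if __name__ == "__main__":
    from types import SimpleNamespace

    # Trivial stand-in components so the sketch runs end to end without any models.
    pipeline = HatefulMemePipeline(
        ocr=SimpleNamespace(read=lambda img: "example overlaid text"),
        captioner=SimpleNamespace(describe=lambda img: "a person holding a sign"),
        classifier=SimpleNamespace(classify=lambda text, cap: []),
        retriever=SimpleNamespace(search=lambda query: ["no matching context"]),
        vqa_model=SimpleNamespace(ask=lambda img, q: "no"),
    )
    print(pipeline.analyse(image=None))
```

Each stage is injected as a dependency, so concrete OCR, captioning, retrieval, and VQA models can be swapped in without changing the orchestration logic; the stubs in the `__main__` block exist only so the sketch executes.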

Ali Anaissi, Junaid Akram, Kunal Chaturvedi, Ali Braytee

Subject: Computing Technology, Computer Technology

Ali Anaissi, Junaid Akram, Kunal Chaturvedi, Ali Braytee. Detecting and Understanding Hateful Contents in Memes Through Captioning and Visual Question-Answering [EB/OL]. (2025-04-23) [2025-07-16]. https://arxiv.org/abs/2504.16723
