
Decoding Memes: Benchmarking Narrative Role Classification across Multilingual and Multimodal Models

Source: arXiv
Abstract (English)

This work investigates the challenging task of identifying narrative roles - Hero, Villain, Victim, and Other - in Internet memes, across three diverse test sets spanning English and code-mixed (English-Hindi) languages. Building on an annotated dataset originally skewed toward the 'Other' class, we explore a more balanced and linguistically diverse extension, originally introduced as part of the CLEF 2024 shared task. Comprehensive lexical and structural analyses highlight the nuanced, culture-specific, and context-rich language used in real memes, in contrast to synthetically curated hateful content, which exhibits explicit and repetitive lexical markers. To benchmark the role detection task, we evaluate a wide spectrum of models, including fine-tuned multilingual transformers, sentiment and abuse-aware classifiers, instruction-tuned LLMs, and multimodal vision-language models. Performance is assessed under zero-shot settings using precision, recall, and F1 metrics. While larger models like DeBERTa-v3 and Qwen2.5-VL demonstrate notable gains, results reveal consistent challenges in reliably identifying the 'Victim' class and generalising across cultural and code-mixed content. We also explore prompt design strategies to guide multimodal models and find that hybrid prompts incorporating structured instructions and role definitions offer marginal yet consistent improvements. Our findings underscore the importance of cultural grounding, prompt engineering, and multimodal reasoning in modelling subtle narrative framings in visual-textual content.
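To make the evaluation protocol concrete, the sketch below illustrates one way the described setup could look in code: a hybrid prompt combining structured instructions with explicit role definitions, with zero-shot predictions scored by per-class precision, recall, and F1. The prompt wording, the parse_role heuristic, and the query_model callable are illustrative assumptions, not the authors' released implementation.

    # Hypothetical sketch of the zero-shot evaluation described in the
    # abstract: a hybrid prompt (structured instructions + explicit role
    # definitions) is sent to the model under test, and predictions are
    # scored with per-class precision/recall/F1. All names here, including
    # query_model(), are illustrative assumptions, not the paper's code.
    from sklearn.metrics import precision_recall_fscore_support

    ROLES = ["Hero", "Villain", "Victim", "Other"]

    # A "hybrid" prompt: task instructions plus explicit role definitions.
    HYBRID_PROMPT = """You are analysing an Internet meme.
    Classify the narrative role of the entity "{entity}" in the meme text.

    Role definitions:
    - Hero: portrayed positively, praised, or glorified.
    - Villain: portrayed negatively, blamed, or vilified.
    - Victim: portrayed as harmed, wronged, or deserving sympathy.
    - Other: none of the above applies.

    Meme text: {text}
    Answer with exactly one word: Hero, Villain, Victim, or Other."""

    def parse_role(response: str) -> str:
        """Map a free-form model response onto the closest valid label."""
        for role in ROLES:
            if role.lower() in response.lower():
                return role
        return "Other"  # fall back when the model answers off-format

    def evaluate(samples, query_model):
        """samples: iterable of dicts with 'text', 'entity', 'gold' keys.
        query_model: callable that sends a prompt to the model under test."""
        gold, pred = [], []
        for s in samples:
            prompt = HYBRID_PROMPT.format(entity=s["entity"], text=s["text"])
            pred.append(parse_role(query_model(prompt)))
            gold.append(s["gold"])
        p, r, f1, _ = precision_recall_fscore_support(
            gold, pred, labels=ROLES, zero_division=0
        )
        for role, pi, ri, fi in zip(ROLES, p, r, f1):
            print(f"{role:>7}: P={pi:.3f} R={ri:.3f} F1={fi:.3f}")

Reporting the metrics per class, rather than only as a macro average, is what surfaces the weak 'Victim' performance the abstract highlights.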

Shivam Sharma, Tanmoy Chakraborty

Subjects: Linguistics; Information Dissemination; Knowledge Dissemination

Shivam Sharma, Tanmoy Chakraborty. Decoding Memes: Benchmarking Narrative Role Classification across Multilingual and Multimodal Models [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2506.23122.
