On VLMs for Diverse Tasks in Multimodal Meme Classification
In this paper, we present a comprehensive and systematic analysis of vision-language models (VLMs) for disparate meme classification tasks. We introduce a novel approach that generates a VLM-based understanding of meme images and fine-tunes LLMs on textual understanding of the embedded meme text to improve performance. Our contributions are threefold: (1) benchmarking VLMs with diverse prompting strategies tailored to each sub-task; (2) evaluating LoRA fine-tuning across all VLM components to assess performance gains; and (3) proposing a novel approach in which detailed meme interpretations generated by VLMs are used to train smaller language models (LLMs), significantly improving classification. This strategy of combining VLMs with LLMs improved baseline performance by 8.34%, 3.52% and 26.24% for sarcasm, offensiveness and sentiment classification, respectively. Our results reveal the strengths and limitations of VLMs and present a novel strategy for meme understanding.
Deepesh Gavit, Debajyoti Mazumder, Samiran Das, Jasabanta Patro
Computing Technology, Computer Technology
Deepesh Gavit, Debajyoti Mazumder, Samiran Das, Jasabanta Patro. On VLMs for Diverse Tasks in Multimodal Meme Classification [EB/OL]. (2025-05-27) [2025-07-16]. https://arxiv.org/abs/2505.20937.