EmoGist: Efficient In-Context Learning for Visual Emotion Understanding
In this paper, we introduce EmoGist, a training-free, in-context learning method for performing visual emotion classification with LVLMs. The key intuition of our approach is that context-dependent definitions of emotion labels allow more accurate predictions of emotions, as the ways in which emotions manifest within images are highly context-dependent and nuanced. EmoGist pre-generates multiple explanations of each emotion label by analyzing clusters of example images belonging to that category. At test time, we retrieve a version of the explanation based on embedding similarity, and feed it to a fast VLM for classification. Through our experiments, we show that EmoGist yields improvements of up to 13 points in micro F1 on the multi-label Memotion dataset, and up to 8 points in macro F1 on the multi-class FI dataset.
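The test-time retrieval step described above can be sketched as follows. This is a hypothetical illustration, not the paper's released implementation: the function names, the use of cluster-centroid embeddings, and cosine similarity as the matching metric are all assumptions.

```python
import numpy as np

def retrieve_explanation(image_embedding, cluster_embeddings, explanations):
    """Return the pre-generated explanation whose image-cluster embedding
    is most similar (by cosine similarity) to the test image's embedding.

    All names here are illustrative sketches of the retrieval idea,
    not identifiers from the paper's code.
    """
    # Cosine similarity between the test image and each cluster centroid
    sims = cluster_embeddings @ image_embedding / (
        np.linalg.norm(cluster_embeddings, axis=1)
        * np.linalg.norm(image_embedding)
    )
    # The best-matching cluster's explanation is passed to the fast VLM
    return explanations[int(np.argmax(sims))]
```

The retrieved explanation would then be prepended to the classification prompt given to the fast VLM, so that the label definition matches the visual context of the test image.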
Ronald Seoh, Dan Goldwasser
Computing technology; computer technology
Ronald Seoh, Dan Goldwasser. EmoGist: Efficient In-Context Learning for Visual Emotion Understanding [EB/OL]. (2025-05-20) [2025-06-17]. https://arxiv.org/abs/2505.14660.