Entity Re-identification in Visual Storytelling via Contrastive Reinforcement Learning
Visual storytelling systems, particularly large vision-language models, struggle to maintain character and object identity across frames: they often fail to recognize when entities in different images represent the same individuals or objects, which leads to inconsistent references and referential hallucinations. This occurs because models lack explicit training on when to establish entity connections across frames. We propose a contrastive reinforcement learning approach that trains models to discriminate between coherent image sequences and stories from unrelated images. We extend the Story Reasoning dataset with synthetic negative examples to teach appropriate entity connection behavior. We employ Direct Preference Optimization with a dual-component reward function that promotes grounding and re-identification of entities in real stories while penalizing incorrect entity connections in synthetic contexts. Using this contrastive framework, we fine-tune Qwen Storyteller (based on Qwen2.5-VL 7B). Evaluation shows improvements in grounding mAP from 0.27 to 0.31 (+14.8%) and F1 from 0.35 to 0.41 (+17.1%). Pronoun grounding accuracy improved across all pronoun types except "its", and cross-frame character and object persistence increased across all frame counts, with entities appearing in 5 or more frames advancing from 29.3% to 33.3% (+13.7%). Well-structured stories, containing both the chain-of-thought and the grounded story, increased from 79.1% to 97.5% (+23.3%).
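The following is a minimal, illustrative sketch (not the authors' released code) of how a dual-component reward of the kind described above could score candidate stories and build preference pairs for DPO: grounding and cross-frame re-identification are rewarded on real image sequences, while cross-frame entity connections are penalized on synthetic sequences of unrelated images. All function names, data fields, and weighting constants here are assumptions introduced for illustration.

```python
# Hypothetical sketch of a dual-component reward for DPO preference construction.
# Field names and constants are assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class CandidateStory:
    text: str
    entity_links: Dict[str, List[int]]  # entity id -> frame indices where it is re-identified
    grounded_mentions: int              # mentions tied to an image region / bounding box
    total_mentions: int


def dual_component_reward(story: CandidateStory, is_real_sequence: bool,
                          alpha: float = 1.0, beta: float = 1.0) -> float:
    """Reward grounding and re-identification on real sequences; penalize
    cross-frame entity connections on synthetic (unrelated-image) sequences."""
    grounding_score = story.grounded_mentions / max(story.total_mentions, 1)
    # Cross-frame links are re-identification claims: the same entity id in >1 frame.
    cross_frame_links = sum(1 for frames in story.entity_links.values()
                            if len(set(frames)) > 1)
    if is_real_sequence:
        # Coherent sequence: persistence across frames is desirable (capped for stability).
        return alpha * grounding_score + beta * min(cross_frame_links, 5) / 5.0
    # Unrelated images: any cross-frame connection is spurious and penalized.
    return alpha * grounding_score - beta * cross_frame_links


def build_preference_pair(candidates: List[CandidateStory],
                          is_real_sequence: bool) -> Tuple[CandidateStory, CandidateStory]:
    """Rank candidate stories by reward and return (chosen, rejected) for DPO."""
    ranked = sorted(candidates,
                    key=lambda c: dual_component_reward(c, is_real_sequence),
                    reverse=True)
    return ranked[0], ranked[-1]


if __name__ == "__main__":
    candidates = [
        CandidateStory("The boy [char1] from frame 1 reappears in frame 3.",
                       {"char1": [1, 3]}, grounded_mentions=8, total_mentions=10),
        CandidateStory("A boy appears. Later, a different boy appears.",
                       {"char1": [1], "char2": [3]}, grounded_mentions=4, total_mentions=10),
    ]
    chosen, rejected = build_preference_pair(candidates, is_real_sequence=True)
    print("Chosen:", chosen.text)
    print("Rejected:", rejected.text)
```

On a real coherent sequence, the candidate that grounds its mentions and re-identifies the same character across frames is preferred; on a synthetic sequence of unrelated images, the same re-identification behavior would lower the reward, which is the contrastive signal the training relies on.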
Daniel A. P. Oliveira, David Martins de Matos
Computing Technology; Computer Technology
Daniel A. P. Oliveira, David Martins de Matos. Entity Re-identification in Visual Storytelling via Contrastive Reinforcement Learning [EB/OL]. (2025-07-11) [2025-07-22]. https://arxiv.org/abs/2507.07340.