Solution for Meta KDD Cup'25: A Comprehensive Three-Step Framework for Vision Question Answering
Vision Large Language Models (VLLMs) have improved multi-modal understanding and visual question answering (VQA), but they still suffer from hallucinated answers. Multi-modal Retrieval-Augmented Generation (RAG) helps address this problem by incorporating external information, yet challenges remain in visual context comprehension, multi-source retrieval, and multi-turn interactions. To address these challenges, Meta constructed the CRAG-MM benchmark and launched the CRAG-MM Challenge at KDD Cup 2025, which consists of three tasks. This paper describes the BlackPearl team's solutions to all three tasks of Meta KDD Cup'25. We use a single model for each task, with key methods including data augmentation, RAG, reranking, and multi-task fine-tuning. Our solutions achieve automatic evaluation rankings of 3rd, 3rd, and 1st on the three tasks, and win second place in Task 3 after human evaluation.
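To illustrate the retrieve-rerank-generate structure the abstract refers to, below is a minimal, hypothetical sketch of such a pipeline. All function names, scoring rules, and data are illustrative stand-ins (toy word-overlap scores instead of a real dense retriever or cross-encoder, a formatted prompt instead of an actual VLLM call), not the authors' implementation.

```python
# Illustrative retrieve -> rerank -> generate pipeline (hypothetical sketch).
from typing import List


def retrieve(query: str, corpus: List[str], k: int = 5) -> List[str]:
    """Stage 1: coarse retrieval. A word-overlap count stands in for a
    real dense or multi-modal retriever."""
    q_tokens = set(query.lower().split())
    scored = [(len(q_tokens & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]


def rerank(query: str, candidates: List[str]) -> List[str]:
    """Stage 2: a finer-grained scorer reorders the candidates. A toy
    length-normalized overlap stands in for a cross-encoder reranker."""
    q_tokens = set(query.lower().split())

    def score(doc: str) -> float:
        d_tokens = set(doc.lower().split())
        return len(q_tokens & d_tokens) / max(len(d_tokens), 1)

    return sorted(candidates, key=score, reverse=True)


def generate(query: str, context: List[str]) -> str:
    """Stage 3: the (V)LLM answers conditioned on the reranked context.
    Here we only build the prompt; a real system would call the model."""
    return "Context:\n" + "\n".join(context) + f"\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is 330 metres tall.",
        "KDD Cup 2025 hosts the CRAG-MM challenge for multi-modal RAG.",
        "Retrieval-augmented generation grounds answers in external sources.",
    ]
    query = "What is the CRAG-MM challenge?"
    context = rerank(query, retrieve(query, corpus))
    print(generate(query, context))
```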
Zijian Zhang, Xiaocheng Zhang, Yang Zhou, Zhimin Lin, Peng Yan
Computing Technology, Computer Science and Technology
Zijian Zhang, Xiaocheng Zhang, Yang Zhou, Zhimin Lin, Peng Yan. Solution for Meta KDD Cup'25: A Comprehensive Three-Step Framework for Vision Question Answering [EB/OL]. (2025-07-29) [2025-08-11]. https://arxiv.org/abs/2507.21520