Better Supervised Fine-tuning for VQA: Integer-Only Loss
With the rapid advancement of vision-language models (VLMs), their ability to assess visual content against specific criteria and dimensions has become increasingly critical for applications such as video-theme consistency assessment and visual quality scoring. However, existing methods often suffer from imprecise results and inefficient loss calculation, which limit the model's focus on key evaluation indicators. To address this, we propose IOVQA (Integer-Only VQA), a novel fine-tuning approach tailored to VLMs that improves their performance on video quality assessment (VQA) tasks. The key innovation of IOVQA lies in its label construction and its targeted loss-calculation mechanism. Specifically, during dataset curation we constrain the model's output to integers in the range [10, 50], ensuring numerical stability, and convert the decimal Overall_MOS values to integers before using them as labels. We also introduce a target-mask strategy: when computing the loss, only the first two-digit integer of the label is left unmasked, forcing the model to learn the critical components of the numerical evaluation. After fine-tuning the Qwen2.5-VL model on the constructed dataset, experimental results demonstrate that the proposed method significantly improves the model's accuracy and consistency on the VQA task, ranking 3rd in the VQualA 2025 GenAI-Bench AIGC Video Quality Assessment Challenge -- Track I. Our work highlights the effectiveness of retaining only integer labels during fine-tuning, offering a practical approach to optimizing VLMs in quantitative evaluation scenarios.
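To make the two ideas in the abstract concrete, the following is a minimal Python/PyTorch sketch, not the authors' released code: it assumes a causal-LM fine-tuning setup where non-score label tokens are set to the standard ignore index (-100), and the names mos_to_integer_label, masked_score_loss, and score_token_positions are illustrative.

    import torch
    import torch.nn.functional as F

    def mos_to_integer_label(overall_mos: float) -> str:
        """Convert a decimal Overall_MOS to an integer label clamped to [10, 50]."""
        score = int(round(overall_mos))          # e.g. 37.4 -> 37
        score = max(10, min(50, score))          # keep output in the stable range
        return str(score)

    def masked_score_loss(logits: torch.Tensor,
                          label_ids: torch.Tensor,
                          score_token_positions: torch.Tensor) -> torch.Tensor:
        """Cross-entropy over only the two-digit score tokens.

        logits: (seq_len, vocab_size) next-token logits from the VLM.
        label_ids: (seq_len,) target token ids for the full response.
        score_token_positions: indices of the tokens encoding the
            two-digit integer score; every other position is masked.
        """
        masked = torch.full_like(label_ids, -100)  # -100 is ignored by cross_entropy
        masked[score_token_positions] = label_ids[score_token_positions]
        return F.cross_entropy(logits, masked, ignore_index=-100)

Under these assumptions, a label such as "37" contributes loss only through the tokens for its two digits, which is one way to realize the target-mask strategy the abstract describes.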
Baihong Qian, Haotian Fan, Wenjie Liao, Yunqiu Wang, Tao Li, Junhui Cui
Computing Technology; Computer Technology
Baihong Qian, Haotian Fan, Wenjie Liao, Yunqiu Wang, Tao Li, Junhui Cui. Better Supervised Fine-tuning for VQA: Integer-Only Loss [EB/OL]. (2025-08-15) [2025-08-28]. https://arxiv.org/abs/2508.11170.