Quantifying Memorization and Parametric Response Rates in Retrieval-Augmented Vision-Language Models
Large Language Models (LLMs) demonstrate remarkable capabilities in question answering (QA), but metrics for assessing their reliance on memorization versus retrieval remain underdeveloped. Moreover, while finetuned models are state-of-the-art on closed-domain tasks, general-purpose models like GPT-4o exhibit strong zero-shot performance. This raises questions about the trade-offs between memorization, generalization, and retrieval. In this work, we analyze the extent to which multimodal retrieval-augmented Vision-Language Models (VLMs) memorize training data compared to baseline VLMs. Using the WebQA benchmark, we contrast finetuned models with baseline VLMs on multihop retrieval and question answering, examining the impact of finetuning on data memorization. To quantify memorization in end-to-end retrieval and QA systems, we propose several proxy metrics by investigating instances where QA succeeds despite retrieval failing. In line with existing work, we find that finetuned models rely more heavily on memorization than retrieval-augmented VLMs, and achieve higher accuracy as a result (72% vs. 52% on the WebQA test set). Finally, we present the first empirical comparison of the parametric effect between text and visual modalities. Here, we find that image-based questions have parametric response rates that are consistently 15-25% higher than those for text-based questions in the WebQA dataset. As such, our measures pose a challenge for future work, both to account for differences in model memorization across different modalities and more generally to reconcile memorization and generalization in joint Retrieval-QA tasks.
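The parametric-response proxy described in the abstract can be made concrete with a short sketch: given per-question flags for answer correctness and retrieval success, the rate is the fraction of retrieval failures that the model nonetheless answers correctly. The record fields, modality labels, and helper names below are illustrative assumptions, not the paper's exact definitions.

from dataclasses import dataclass
from typing import List

@dataclass
class QARecord:
    # Illustrative per-question fields; not the paper's exact schema.
    qa_correct: bool      # did the model answer this question correctly?
    retrieval_hit: bool   # did retrieval return the gold supporting sources?
    modality: str         # "image" or "text" question type in WebQA

def parametric_response_rate(records: List[QARecord]) -> float:
    """Fraction of retrieval failures that are still answered correctly.

    A high rate suggests the answer came from parametric (memorized)
    knowledge rather than from the retrieved evidence.
    """
    failures = [r for r in records if not r.retrieval_hit]
    if not failures:
        return 0.0
    return sum(r.qa_correct for r in failures) / len(failures)

def rate_by_modality(records: List[QARecord]) -> dict:
    """Compare parametric response rates for image- vs. text-based questions."""
    return {
        m: parametric_response_rate([r for r in records if r.modality == m])
        for m in ("image", "text")
    }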
Peter Carragher, Abhinand Jha, R Raghav, Kathleen M. Carley
Computing Technology, Computer Technology
Peter Carragher, Abhinand Jha, R Raghav, Kathleen M. Carley. Quantifying Memorization and Parametric Response Rates in Retrieval-Augmented Vision-Language Models [EB/OL]. (2025-02-19) [2025-08-02]. https://arxiv.org/abs/2502.13836