RePOPE: Impact of Annotation Errors on the POPE Benchmark

Source: arXiv
Abstract

Since data annotation is costly, benchmark datasets often incorporate labels from established image datasets. In this work, we assess the impact of label errors in MSCOCO on the frequently used object hallucination benchmark POPE. We re-annotate the benchmark images and identify an imbalance in annotation errors across different subsets. Evaluating multiple models on the revised labels, which we denote as RePOPE, we observe notable shifts in model rankings, highlighting the impact of label quality. Code and data are available at https://github.com/YanNeu/RePOPE .
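To illustrate why corrected labels can reorder models, the minimal sketch below scores the same yes/no object-presence predictions against an original and a revised label set and compares the resulting rankings. It is not taken from the RePOPE repository; the function names, data structures, and toy labels are all hypothetical.

```python
# Hypothetical sketch: ranking models on POPE-style yes/no questions
# under original vs. re-annotated labels. Toy data only.

from typing import Dict, List

def f1_score(preds: List[bool], labels: List[bool]) -> float:
    """F1 for the binary 'is the object present?' decision."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rank_models(model_preds: Dict[str, List[bool]],
                labels: List[bool]) -> List[str]:
    """Order model names by F1 under a given label set."""
    return sorted(model_preds,
                  key=lambda m: f1_score(model_preds[m], labels),
                  reverse=True)

# Toy example: two models, original vs. corrected labels.
pope_labels   = [True, True, False, False, True]   # hypothetical original labels
repope_labels = [True, False, False, False, True]  # hypothetical revised labels
model_preds = {
    "model_a": [True, True, False, True, True],
    "model_b": [True, False, False, False, True],
}
print("Ranking under original labels:", rank_models(model_preds, pope_labels))
print("Ranking under revised labels: ", rank_models(model_preds, repope_labels))
```

In this toy setup the two label sets produce opposite orderings, which is the kind of ranking shift the abstract reports for the revised RePOPE labels.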

Yannic Neuhaus, Matthias Hein

Computing Technology, Computer Technology

Yannic Neuhaus, Matthias Hein. RePOPE: Impact of Annotation Errors on the POPE Benchmark [EB/OL]. (2025-04-22) [2025-06-24]. https://arxiv.org/abs/2504.15707.
