SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks
Generalization of deep-learning-based (DL) computer vision algorithms to various image perturbations is hard to establish and remains an active area of research. The majority of past analyses focused on images that were already captured, whereas the effects of the image formation pipeline and environment are less studied. In this paper, we address this issue by analyzing the impact of capture conditions, such as camera parameters and lighting, on DL model performance on three vision tasks -- image classification, object detection, and visual question answering (VQA). To this end, we assess capture bias in common vision datasets and create a new benchmark, SNAP (for $\textbf{S}$hutter speed, ISO se$\textbf{N}$sitivity, and $\textbf{AP}$erture), consisting of images of objects taken under controlled lighting conditions and with densely sampled camera settings. We then evaluate a large number of DL vision models and show the effects of capture conditions on each selected vision task. Lastly, we conduct an experiment to establish a human baseline for the VQA task. Our results show that computer vision datasets are significantly biased, that models trained on this data do not reach human accuracy even on well-exposed images, and that they are susceptible to both major exposure changes and minute variations in camera settings. Code and data can be found at https://github.com/ykotseruba/SNAP
Iuliia Kotseruba, John K. Tsotsos
Computing technology, computer technology
Iuliia Kotseruba, John K. Tsotsos. SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks [EB/OL]. (2025-05-21) [2025-06-06]. https://arxiv.org/abs/2505.15628