Semantically Guided Adversarial Testing of Vision Models Using Language Models
In targeted adversarial attacks on vision models, the selection of the target label is a critical yet often overlooked determinant of attack success. The target label is the class that the attacker aims to force the model to predict. Existing selection strategies typically rely on randomness, model predictions, or static semantic resources, which limits their interpretability, reproducibility, or flexibility. This paper proposes a semantics-guided framework for adversarial target selection that leverages cross-modal knowledge transfer from pretrained language and vision-language models. We evaluate several state-of-the-art models (BERT, TinyLlama, and CLIP) as similarity sources to select the most and least semantically related labels with respect to the ground truth, forming best- and worst-case adversarial scenarios. Our experiments on three vision models and five attack methods reveal that these models consistently yield practical adversarial targets and surpass static lexical databases, such as WordNet, particularly for distant class relationships. We also observe that static testing of target labels offers a preliminary, a priori assessment of the effectiveness of similarity sources. Our results corroborate the suitability of pretrained models for constructing interpretable, standardized, and scalable adversarial benchmarks across architectures and datasets.
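The core selection step described in the abstract, choosing the most and least semantically similar class labels to the ground truth, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 2-D vectors stand in for real label embeddings, which in the paper come from models such as BERT, TinyLlama, or CLIP's text encoder.

```python
import numpy as np

def select_targets(label_embeddings: np.ndarray, true_idx: int) -> tuple[int, int]:
    """Return (best_case_idx, worst_case_idx) target labels.

    label_embeddings: (n_classes, dim) array of L2-normalized label embeddings.
    true_idx: index of the ground-truth class.
    The most similar other label forms the best-case (easiest) adversarial
    target; the least similar forms the worst-case (hardest) target.
    """
    # Cosine similarity of every label to the ground-truth label
    sims = label_embeddings @ label_embeddings[true_idx]
    sims[true_idx] = -np.inf          # exclude the ground truth from argmax
    most_similar = int(np.argmax(sims))
    sims[true_idx] = np.inf           # exclude it from argmin as well
    least_similar = int(np.argmin(sims))
    return most_similar, least_similar

# Toy unit-norm embeddings for labels ["cat", "dog", "airplane"] (illustrative only)
emb = np.array([
    [1.0, 0.0],       # cat
    [0.9, 0.436],     # dog: close to cat
    [0.0, 1.0],       # airplane: far from cat
])
print(select_targets(emb, 0))  # for "cat": most similar is "dog", least is "airplane"
```

Swapping the toy matrix for embeddings produced by a pretrained text encoder gives the semantics-guided best-/worst-case targets the paper evaluates.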
Katarzyna Filus, Jorge M. Cruz-Duarte
Computing technology, computer technology
Katarzyna Filus, Jorge M. Cruz-Duarte. Semantically Guided Adversarial Testing of Vision Models Using Language Models [EB/OL]. (2025-08-15) [2025-08-28]. https://arxiv.org/abs/2508.11341.