Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection
Pre-trained vision-language models have exhibited remarkable abilities in detecting out-of-distribution (OOD) samples. However, some challenging OOD samples, which lie close to in-distribution (InD) data in image feature space, can still be misidentified as InD. The emergence of foundation models such as diffusion models and multimodal large language models (MLLMs) offers a potential solution to this issue. In this work, we propose SynOOD, a novel approach that harnesses foundation models to generate synthetic, challenging OOD data for fine-tuning CLIP models, thereby enhancing boundary-level discrimination between InD and OOD samples. Our method uses an iterative in-painting process guided by contextual prompts from MLLMs to produce nuanced, boundary-aligned OOD samples. These samples are refined through noise adjustments based on gradients from OOD scores such as the energy score, effectively sampling from the InD/OOD boundary. With these carefully synthesized images, we fine-tune the CLIP image encoder and the negative label features derived from the text encoder to strengthen connections between near-boundary OOD samples and a set of negative labels. SynOOD achieves state-of-the-art performance on the large-scale ImageNet benchmark with minimal increases in parameters and runtime, significantly surpassing existing methods by improving AUROC by 2.80% and reducing FPR95 by 11.13%. Code is available at https://github.com/Jarvisgivemeasuit/SynOOD.
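To make the gradient-based refinement step more concrete, below is a minimal sketch (not the authors' released code) of how a synthesized candidate image could be nudged along the gradient of an energy-style OOD score so that it settles near the InD/OOD decision boundary. The MLLM-guided in-painting loop, the diffusion latents, and model loading are omitted; `clip_model`, `ind_text_features`, `target_energy`, and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


@torch.enable_grad()
def refine_toward_boundary(image, clip_model, ind_text_features,
                           target_energy=0.0, step_size=0.01, steps=10,
                           temperature=1.0):
    """Adjust `image` so its energy score approaches `target_energy`.

    image:             (1, 3, H, W) CLIP-preprocessed tensor.
    ind_text_features: (C, D) L2-normalized text embeddings of the InD classes.
    """
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        # Cosine similarities between the image and InD class prompts.
        img_feat = F.normalize(clip_model.encode_image(image), dim=-1)
        logits = img_feat @ ind_text_features.t()                      # (1, C)

        # Energy score: E(x) = -T * logsumexp(logits / T).
        energy = -temperature * torch.logsumexp(logits / temperature, dim=-1)

        # Push the energy toward the assumed boundary value and take a
        # signed-gradient step on the image.
        loss = (energy - target_energy).pow(2).mean()
        grad, = torch.autograd.grad(loss, image)
        image = (image - step_size * grad.sign()).detach().requires_grad_(True)
    return image.detach()
```

In the paper's pipeline this adjustment is applied to the noise driving the in-painting process rather than directly to pixels; the sketch only illustrates the score-gradient idea.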
Jinglun Li, Kaixun Jiang, Zhaoyu Chen, Bo Lin, Yao Tang, Weifeng Ge, Wenqiang Zhang
Computing technology, computer technology
Jinglun Li, Kaixun Jiang, Zhaoyu Chen, Bo Lin, Yao Tang, Weifeng Ge, Wenqiang Zhang. Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection [EB/OL]. (2025-07-14) [2025-08-02]. https://arxiv.org/abs/2507.10225