3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation
Vision-Language Models (VLMs) have shown remarkable performance on diverse visual and linguistic tasks, yet they remain fundamentally limited in their understanding of 3D spatial structures. We propose Geometric Distillation, a lightweight, annotation-free fine-tuning framework that injects human-inspired geometric cues into pretrained VLMs without modifying their architecture. By distilling (1) sparse correspondences, (2) relative depth relations, and (3) dense cost volumes from off-the-shelf 3D foundation models (e.g., MASt3R, VGGT), our method shapes representations to be geometry-aware while remaining compatible with natural image-text inputs. Through extensive evaluations on 3D vision-language reasoning and 3D perception benchmarks, our method consistently outperforms prior approaches, achieving improved 3D spatial reasoning with significantly lower computational cost. Our work demonstrates a scalable and efficient path to bridge 2D-trained VLMs with 3D understanding, opening up wider use in spatially grounded multimodal tasks.
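The abstract names three distilled cues (sparse correspondences, relative depth relations, dense cost volumes). The sketch below is a minimal PyTorch illustration of how such a combined distillation objective could be assembled from precomputed teacher outputs; it is not the authors' released implementation. The function name, loss weights, margin, temperature, tensor shapes, and the linear depth probe are all assumptions made for illustration.

```python
# Illustrative sketch only: a geometric distillation objective that aligns a VLM's
# patch features with cues exported from a frozen 3D teacher (e.g., MASt3R / VGGT).
# All names, weights, and shapes below are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def geometric_distillation_loss(feats_a, feats_b, depth_head,
                                teacher_corr, teacher_depth_order, teacher_cost,
                                w_corr=1.0, w_depth=0.5, w_cost=0.5,
                                margin=0.1, tau=0.07):
    """feats_a, feats_b: (N, D) patch embeddings of two views from the VLM.
    depth_head: assumed small probe, e.g. nn.Linear(D, 1), trained jointly.
    teacher_corr: (K, 2) indices of matched patches between the two views.
    teacher_depth_order: (M, 2) index pairs where the first patch is closer than the second.
    teacher_cost: (N, N) dense matching scores (cost volume) from the 3D teacher."""
    fa = F.normalize(feats_a, dim=-1)
    fb = F.normalize(feats_b, dim=-1)

    # (1) Sparse correspondences: matched patches should have similar embeddings.
    ia, ib = teacher_corr[:, 0], teacher_corr[:, 1]
    loss_corr = (1.0 - (fa[ia] * fb[ib]).sum(-1)).mean()

    # (2) Relative depth relations: margin ranking loss so the probe's depth
    #     ordering agrees with the teacher's near/far ordering.
    d = depth_head(feats_a).squeeze(-1)
    i_near, i_far = teacher_depth_order[:, 0], teacher_depth_order[:, 1]
    loss_depth = F.relu(d[i_near] - d[i_far] + margin).mean()

    # (3) Dense cost volume: match the student's patch-similarity distribution
    #     to the teacher's via KL divergence.
    student_cost = fa @ fb.t() / tau
    loss_cost = F.kl_div(F.log_softmax(student_cost, dim=-1),
                         F.softmax(teacher_cost, dim=-1),
                         reduction="batchmean")

    return w_corr * loss_corr + w_depth * loss_depth + w_cost * loss_cost

# Toy usage with random tensors standing in for VLM patch features and teacher outputs.
N, D = 196, 768
feats_a, feats_b = torch.randn(N, D), torch.randn(N, D)
depth_head = nn.Linear(D, 1)
corr = torch.randint(0, N, (64, 2))
order = torch.randint(0, N, (64, 2))
cost = torch.randn(N, N)
loss = geometric_distillation_loss(feats_a, feats_b, depth_head, corr, order, cost)
```

Because the teacher cues are precomputed and only a lightweight probe is added, such an objective can be applied to a pretrained VLM without architectural changes, which is consistent with the framework's stated goal of annotation-free, low-cost fine-tuning.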
Seonho Lee, Jiho Choi, Inha Kang, Jiwook Kim, Junsung Park, Hyunjung Shim
Subject: Computing technology; computer technology
Seonho Lee, Jiho Choi, Inha Kang, Jiwook Kim, Junsung Park, Hyunjung Shim. 3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation [EB/OL]. (2025-06-11) [2025-07-01]. https://arxiv.org/abs/2506.09883