Ascending the Infinite Ladder: Benchmarking Spatial Deformation Reasoning in Vision-Language Models
Humans naturally possess the spatial reasoning ability to form and manipulate images and structures of objects in space. There is an increasing effort to endow Vision-Language Models (VLMs) with similar spatial reasoning capabilities. However, it remains unclear whether these models truly understand and manipulate spatial objects. To address this question, we propose a new evaluation framework aimed at assessing the performance of VLMs on spatial deformation reasoning tasks. Specifically, we construct a benchmark for spatial deformation reasoning from 2D to 3D. Leveraging our data engine, we can generate unlimited evaluation problem pairs with arbitrarily many deformation steps, without any data leakage. We explore whether a model can effectively perform spatial deformation reasoning in two directions: forward reasoning (given the operations, predict the final state) and reverse reasoning (given the final state, determine the operations). We adopt a ladder competition format, using the number of deformation steps as the level classification criterion, with the goal of probing the boundaries of a model's deformation reasoning capabilities. Interestingly, the benchmarking results reveal that almost no model demonstrates plausible spatial deformation reasoning abilities. Furthermore, even after applying targeted training and mainstream reasoning enhancement methods, the models are still unable to perform well on 3D spatial deformation reasoning.
Jiahuan Zhang, Shunwen Bai, Tianheng Wang, Kaiwen Guo, Kai Han, Guozheng Rao, Kaicheng Yu
Research methods in the natural sciences
Jiahuan Zhang, Shunwen Bai, Tianheng Wang, Kaiwen Guo, Kai Han, Guozheng Rao, Kaicheng Yu. Ascending the Infinite Ladder: Benchmarking Spatial Deformation Reasoning in Vision-Language Models [EB/OL]. (2025-07-01) [2025-07-19]. https://arxiv.org/abs/2507.02978.