
DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning


Source: arXiv

Abstract

Although Vision Language Models (VLMs) exhibit strong perceptual abilities and impressive visual reasoning, they struggle with attention to detail and precise action planning in complex, dynamic environments, leading to subpar performance. Real-world tasks typically require complex interactions, advanced spatial reasoning, long-term planning, and continuous strategy refinement, usually necessitating understanding the physics rules of the target scenario. However, evaluating these capabilities in real-world scenarios is often prohibitively expensive. To bridge this gap, we introduce DeepPHY, a novel benchmark framework designed to systematically evaluate VLMs' understanding and reasoning about fundamental physical principles through a series of challenging simulated environments. DeepPHY integrates multiple physical reasoning environments of varying difficulty levels and incorporates fine-grained evaluation metrics. Our evaluation finds that even state-of-the-art VLMs struggle to translate descriptive physical knowledge into precise, predictive control.

Xinrun Xu, Pi Bu, Ye Wang, Börje F. Karlsson, Ziming Wang, Tengtao Song, Qi Zhu, Jun Song, Zhiming Ding, Bo Zheng

Subject: Computing Technology, Computer Technology

Xinrun Xu, Pi Bu, Ye Wang, Börje F. Karlsson, Ziming Wang, Tengtao Song, Qi Zhu, Jun Song, Zhiming Ding, Bo Zheng. DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2508.05405.