SpatialLLM: From Multi-modality Data to Urban Spatial Intelligence
We propose SpatialLLM, a novel approach advancing spatial intelligence tasks in complex urban scenes. Unlike previous methods that require geographic analysis tools or domain expertise, SpatialLLM is a unified language model that directly addresses various spatial intelligence tasks without any training, fine-tuning, or expert intervention. The core of SpatialLLM lies in constructing detailed and structured scene descriptions from raw spatial data to prompt pre-trained LLMs for scene-based analysis. Extensive experiments show that, with our designs, pretrained LLMs can accurately perceive spatial distribution information and execute advanced spatial intelligence tasks in a zero-shot manner, including urban planning, ecological analysis, and traffic management. We argue that multi-field knowledge, context length, and reasoning ability are the key factors influencing LLM performance in urban analysis. We hope that SpatialLLM will provide a novel and viable perspective for urban intelligent analysis and management. The code and dataset are available at https://github.com/WHU-USI3DV/SpatialLLM.
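To make the core idea concrete, the following is a minimal, hypothetical Python sketch of the prompting pipeline described in the abstract: raw spatial records are serialized into a structured scene description, which is then combined with a task instruction and sent to a pretrained LLM. The record fields, function names, and example task are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch (not the authors' released code): turning raw spatial
# records into a structured scene description that prompts a pretrained LLM.
import json

def build_scene_description(objects):
    """Serialize raw spatial records (category, location, attributes)
    into a structured, human-readable scene description."""
    lines = []
    for obj in objects:
        lines.append(
            f"- {obj['category']} at ({obj['lon']:.5f}, {obj['lat']:.5f}), "
            f"attributes: {json.dumps(obj.get('attributes', {}))}"
        )
    return "Urban scene contains the following objects:\n" + "\n".join(lines)

def build_prompt(scene_description, task):
    """Combine the scene description with a task instruction for zero-shot analysis."""
    return (
        "You are an urban analysis assistant.\n\n"
        f"{scene_description}\n\n"
        f"Task: {task}\n"
        "Answer using only the spatial information above."
    )

if __name__ == "__main__":
    # Toy spatial records; real inputs would come from multi-modality urban data.
    objects = [
        {"category": "bus stop", "lon": 114.35512, "lat": 30.54321,
         "attributes": {"routes": 4}},
        {"category": "park", "lon": 114.35890, "lat": 30.54110,
         "attributes": {"area_m2": 12000}},
    ]
    prompt = build_prompt(
        build_scene_description(objects),
        "Assess whether public transit coverage near the park is adequate.",
    )
    print(prompt)
    # The resulting prompt would then be passed to any pretrained LLM,
    # e.g. via a chat/completions API, without training or fine-tuning.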
Jiabin Chen, Haiping Wang, Jinpeng Li, Yuan Liu, Zhen Dong, Bisheng Yang
Subjects: Surveying and Mapping; Remote Sensing Technology; Regional Planning; Urban and Rural Planning
Jiabin Chen, Haiping Wang, Jinpeng Li, Yuan Liu, Zhen Dong, Bisheng Yang. SpatialLLM: From Multi-modality Data to Urban Spatial Intelligence [EB/OL]. (2025-05-19) [2025-07-09]. https://arxiv.org/abs/2505.12703.