
Large Multi-modal Model Cartographic Map Comprehension for Textual Locality Georeferencing

Source: arXiv
Abstract

Millions of biological sample records collected over the last few centuries and archived in natural history collections remain un-georeferenced. Georeferencing the complex locality descriptions associated with these collection samples is a highly labour-intensive task that collection agencies struggle with. None of the existing automated methods exploits maps, which are an essential tool for georeferencing complex spatial relations. We present preliminary experiments and results of a novel method that exploits the multi-modal capabilities of recent Large Multi-Modal Models (LMMs). This method enables the model to visually contextualize the spatial relations it reads in a locality description. We use a grid-based approach to adapt these auto-regressive models to this task in a zero-shot setting. Our experiments, conducted on a small manually annotated dataset, show impressive results for our approach ($\sim$1 km average distance error) compared to uni-modal georeferencing with Large Language Models and existing georeferencing tools. The paper also discusses the findings of the experiments in light of an LMM's ability to comprehend fine-grained maps. Motivated by these results, we propose a practical framework to integrate this method into a georeferencing workflow.
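The abstract describes a grid-based, zero-shot adaptation of an LMM and reports results as an average distance error in kilometres. As a rough illustration only (the paper's actual prompt design, grid labelling scheme, and map extents are not given here), the sketch below shows one plausible way such a pipeline could convert an LMM's grid-cell answer into geographic coordinates and score it against gold coordinates; `cell_to_coords`, `haversine_km`, and all bounding-box and grid values are hypothetical.

```python
import math

# Hypothetical sketch of the grid-based idea (details assumed, not taken
# from the paper): a labelled grid is overlaid on the map image shown to
# the LMM, the model names the cell containing the locality, and the cell
# label is converted back to geographic coordinates for evaluation.

def cell_to_coords(cell_label, bbox, n_rows, n_cols):
    """Map a grid cell label like 'C4' to the lat/lon of the cell centre.

    bbox = (min_lat, min_lon, max_lat, max_lon) of the map extent.
    """
    row = ord(cell_label[0].upper()) - ord('A')  # letter -> row index
    col = int(cell_label[1:]) - 1                # number -> column index
    min_lat, min_lon, max_lat, max_lon = bbox
    cell_h = (max_lat - min_lat) / n_rows
    cell_w = (max_lon - min_lon) / n_cols
    # Rows are counted from the top of the map image, so invert the row index.
    lat = max_lat - (row + 0.5) * cell_h
    lon = min_lon + (col + 0.5) * cell_w
    return lat, lon

def haversine_km(p, q):
    """Great-circle distance in km, the basis of an average distance error."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Example: the LMM answers 'C4' on a 10x10 grid over a 1-degree map tile.
pred = cell_to_coords('C4', (45.0, 6.0, 46.0, 7.0), 10, 10)
print(haversine_km(pred, (45.72, 6.33)))  # error vs. gold coordinates, in km
```

Averaging `haversine_km` over all test localities would yield the average distance error metric the abstract cites; the $\sim$1 km figure is the paper's reported result, not an output of this sketch.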

Kalana Wijegunarathna, Kristin Stock, Christopher B. Jones

Subjects: Surveying and Mapping; Biological Science Research Methods; Biological Science Research Techniques

Kalana Wijegunarathna, Kristin Stock, Christopher B. Jones. Large Multi-modal Model Cartographic Map Comprehension for Textual Locality Georeferencing [EB/OL]. (2025-07-11) [2025-07-25]. https://arxiv.org/abs/2507.08575.
