Multi-Modal Semantic Parsing for the Interpretation of Tombstone Inscriptions
Tombstones are historically and culturally rich artifacts, encapsulating individual lives, community memory, historical narratives, and artistic expression. Yet many tombstones today face significant preservation challenges, including physical erosion, vandalism, environmental degradation, and political shifts. In this paper, we introduce a novel multi-modal framework for tombstone digitization, aiming to improve the interpretation, organization, and retrieval of tombstone content. Our approach leverages vision-language models (VLMs) to translate tombstone images into structured Tombstone Meaning Representations (TMRs), capturing both image and text information. To further enrich semantic parsing, we incorporate retrieval-augmented generation (RAG) to integrate externally dependent elements such as toponyms, occupation codes, and ontological concepts. Compared to traditional OCR-based pipelines, our method improves parsing accuracy from an F1 score of 36.1 to 89.5. We additionally evaluate the model's robustness across diverse linguistic and cultural inscriptions, and simulate physical degradation through image fusion to assess performance under noisy or damaged conditions. Our work represents the first attempt to formalize tombstone understanding using large vision-language models, with implications for heritage preservation.
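To make the pipeline concrete, here is a minimal sketch of the core step: prompting a VLM to emit a structured TMR from a tombstone photograph. This is not the authors' released code; it assumes an OpenAI-style multimodal chat endpoint, and the JSON schema in the prompt is a hypothetical illustration of what a TMR might contain (the paper's actual TMR format may differ).

```python
# Hypothetical sketch: parse a tombstone image into a TMR-like JSON record.
# Assumes an OpenAI-compatible VLM; the schema below is illustrative only.
import base64
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TMR_PROMPT = """Read the tombstone in the image and return JSON with:
- persons: list of {name, birth_date, death_date, occupation}
- toponyms: place names mentioned in the inscription
- epitaph: the free-text epitaph, transcribed verbatim
Return only valid JSON."""

def parse_tombstone(image_path: str) -> dict:
    # Encode the photo as a base64 data URL for the multimodal message.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force bare JSON output
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": TMR_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)
```

The abstract's robustness experiment (simulating physical degradation through image fusion) can likewise be sketched by alpha-blending a clean photo with a weathering texture. The blend ratio here is an assumed parameter, not a value from the paper.

```python
# Sketch of degradation simulation via image fusion with Pillow.
from PIL import Image

def simulate_degradation(photo_path: str, texture_path: str,
                         alpha: float = 0.35) -> Image.Image:
    clean = Image.open(photo_path).convert("RGB")
    texture = Image.open(texture_path).convert("RGB").resize(clean.size)
    # Image.blend computes (1 - alpha) * clean + alpha * texture pixelwise.
    return Image.blend(clean, texture, alpha)
```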
Xiao Zhang, Johan Bos
Subjects: Information and Knowledge Dissemination; Cultural Relics and Archaeology; Computing Technology and Computer Science
Xiao Zhang, Johan Bos. Multi-Modal Semantic Parsing for the Interpretation of Tombstone Inscriptions [EB/OL]. (2025-07-06) [2025-07-25]. https://arxiv.org/abs/2507.04377.