
The P$^3$ dataset: Pixels, Points and Polygons for Multimodal Building Vectorization


Source: arXiv
Abstract

We present the P$^3$ dataset, a large-scale multimodal benchmark for building vectorization, constructed from aerial LiDAR point clouds, high-resolution aerial imagery, and vectorized 2D building outlines, collected across three continents. The dataset contains over 10 billion LiDAR points with decimeter-level accuracy and RGB images at a ground sampling distance of 25 centimeters. While many existing datasets focus primarily on the image modality, P$^3$ offers a complementary perspective by also incorporating dense 3D information. We demonstrate that LiDAR point clouds serve as a robust modality for predicting building polygons, both in hybrid and end-to-end learning frameworks. Moreover, fusing aerial LiDAR and imagery further improves the accuracy and geometric quality of predicted polygons. The P$^3$ dataset is publicly available, along with code and pretrained weights of three state-of-the-art models for building polygon prediction, at https://github.com/raphaelsulzer/PixelsPointsPolygons.
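To make the three modalities named in the abstract concrete, below is a minimal sketch of how one might load a single tile of such data with standard geospatial Python libraries (laspy, rasterio, shapely). The file names and tile layout are hypothetical placeholders, not the dataset's actual structure or the repository's loader API; consult the linked GitHub repository for the authoritative format and tooling.

```python
# Hypothetical example: loading pixels, points, and polygons for one tile.
# File names below are placeholders, not the real P^3 directory layout.
import json

import numpy as np
import laspy                         # LiDAR point clouds (.las / .laz)
import rasterio                      # aerial imagery (GeoTIFF)
from shapely.geometry import shape   # vector building outlines (GeoJSON)

# Points: aerial LiDAR with decimeter-level accuracy.
las = laspy.read("tile_0001.laz")
points = np.column_stack([las.x, las.y, las.z])   # (N, 3) array in map coordinates

# Pixels: RGB orthophoto at 25 cm ground sampling distance.
with rasterio.open("tile_0001.tif") as src:
    image = src.read()               # (3, H, W) array of RGB bands
    transform = src.transform        # affine mapping between pixel and map coordinates

# Polygons: vectorized 2D building outlines.
with open("tile_0001.geojson") as f:
    buildings = [shape(feat["geometry"]) for feat in json.load(f)["features"]]

print(f"{points.shape[0]} LiDAR points, image shape {image.shape}, "
      f"{len(buildings)} building polygons")
```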

Raphael Sulzer, Liuyun Duan, Nicolas Girard, Florent Lafarge

Subjects: Surveying and Mapping; Remote Sensing Technology

Raphael Sulzer, Liuyun Duan, Nicolas Girard, Florent Lafarge. The P$^3$ dataset: Pixels, Points and Polygons for Multimodal Building Vectorization [EB/OL]. (2025-05-21) [2025-07-20]. https://arxiv.org/abs/2505.15379.
