COP-GEN-Beta: Unified Generative Modelling of COPernicus Imagery Thumbnails
In remote sensing, multi-modal data from various sensors capturing the same scene offers rich opportunities, but learning a unified representation across these modalities remains a significant challenge. Traditional methods have often been limited to single or dual-modality approaches. In this paper, we introduce COP-GEN-Beta, a generative diffusion model trained on optical, radar, and elevation data from the Major TOM dataset. What sets COP-GEN-Beta apart is its ability to map any subset of modalities to any other, enabling zero-shot modality translation after training. This is achieved through a sequence-based diffusion transformer, where each modality is controlled by its own timestep embedding. We extensively evaluate COP-GEN-Beta on thumbnail images from the Major TOM dataset, demonstrating its effectiveness in generating high-quality samples. Qualitative and quantitative evaluations validate the model's performance, highlighting its potential as a powerful pre-trained model for future remote sensing tasks.
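The abstract describes the mechanism only at a high level, so below is a minimal sketch (PyTorch, not the authors' code) of how a sequence-based diffusion transformer can give each modality its own timestep embedding. Conditioning on observed modalities is expressed by fixing their timestep to 0 (clean data), while the modalities being generated carry a live diffusion timestep, which is one plausible way to realize "any subset to any other subset" translation. All module names, token counts, and dimensions are illustrative assumptions.

```python
# Hedged sketch of a multi-modal diffusion transformer with per-modality
# timestep embeddings. Sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn

MODALITIES = ["s2_optical", "s1_radar", "dem_elevation"]  # assumed thumbnail modalities
TOKENS_PER_MOD, DIM = 16, 128                             # illustrative sizes


def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Standard sinusoidal embedding of a per-sample timestep."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half, dtype=torch.float32)
                      * torch.log(torch.tensor(10000.0)) / half)
    args = t[:, None].float() * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)


class MultiModalDenoiser(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Learned embedding identifying which modality a token belongs to.
        self.mod_embed = nn.Embedding(len(MODALITIES), DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(DIM, DIM)

    def forward(self, tokens: torch.Tensor, t_per_mod: torch.Tensor) -> torch.Tensor:
        # tokens: (B, M * TOKENS_PER_MOD, DIM); t_per_mod: (B, M),
        # i.e. one independent timestep per modality.
        B, M = t_per_mod.shape
        t_emb = timestep_embedding(t_per_mod.reshape(-1), DIM).reshape(B, M, DIM)
        m_emb = self.mod_embed(torch.arange(M))
        # Broadcast each modality's timestep + identity embedding over its tokens.
        cond = (t_emb + m_emb).repeat_interleave(TOKENS_PER_MOD, dim=1)
        return self.out(self.backbone(tokens + cond))


model = MultiModalDenoiser()
x = torch.randn(2, len(MODALITIES) * TOKENS_PER_MOD, DIM)
# Zero-shot translation radar + elevation -> optical: observed modalities
# get t = 0, the modality being generated gets a live diffusion timestep.
t = torch.tensor([[500, 0, 0], [500, 0, 0]])
eps_pred = model(x, t)
print(eps_pred.shape)  # torch.Size([2, 48, 128])
```

Because the timestep is set per modality rather than globally, any conditioning pattern (one-to-many, many-to-one, or fully unconditional generation) is just a different choice of the timestep vector at sampling time, without retraining.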
Miguel Espinosa, Valerio Marsocci, Yuru Jia, Elliot J. Crowley, Mikolaj Czerkawski
Remote Sensing
Miguel Espinosa, Valerio Marsocci, Yuru Jia, Elliot J. Crowley, Mikolaj Czerkawski. COP-GEN-Beta: Unified Generative Modelling of COPernicus Imagery Thumbnails [EB/OL]. (2025-04-11) [2025-05-02]. https://arxiv.org/abs/2504.08548