
Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping

Source: arXiv

Abstract

We present Sat2Sound, a multimodal representation learning framework for soundscape mapping, designed to predict the distribution of sounds at any location on Earth. Existing methods for this task rely on satellite imagery and paired geotagged audio samples, which often fail to capture the diversity of sound sources at a given location. To address this limitation, we enhance existing datasets by leveraging a Vision-Language Model (VLM) to generate semantically rich soundscape descriptions for locations depicted in satellite images. Our approach incorporates contrastive learning across audio, audio captions, satellite images, and satellite image captions. We hypothesize that there is a fixed set of soundscape concepts shared across modalities. To this end, we learn a shared codebook of soundscape concepts and represent each sample as a weighted average of these concepts. Sat2Sound achieves state-of-the-art performance in cross-modal retrieval between satellite imagery and audio on two datasets: GeoSound and SoundingEarth. Additionally, building on Sat2Sound's ability to retrieve detailed soundscape captions, we introduce a novel application: location-based soundscape synthesis, which enables immersive acoustic experiences. Our code and models will be publicly available.
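To make the shared-codebook idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: each modality's encoder embedding attends over a learned codebook of soundscape concepts, the sample is represented as the attention-weighted average of those concept vectors, and a symmetric contrastive loss ties two modalities together. The module names, the 64-concept codebook size, the 512-dimensional embeddings, and the temperature are all assumptions for illustration.

```python
# Hypothetical sketch of the shared-codebook representation described in the
# abstract; hyperparameters and names are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoundscapeCodebook(nn.Module):
    """Learned codebook of soundscape concepts, shared across all modalities."""

    def __init__(self, num_concepts: int = 64, dim: int = 512):
        super().__init__()
        # One learnable vector per soundscape concept.
        self.concepts = nn.Parameter(torch.randn(num_concepts, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) embedding from a modality-specific encoder.
        # Attention weights over concepts, then a weighted average of them.
        attn = F.softmax(x @ self.concepts.t() / x.shape[-1] ** 0.5, dim=-1)
        return attn @ self.concepts  # (batch, dim)


def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between paired samples from two modalities."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    codebook = SoundscapeCodebook()
    # Stand-ins for frozen audio / satellite-image encoder outputs.
    audio_emb = torch.randn(8, 512)
    image_emb = torch.randn(8, 512)
    loss = info_nce(codebook(audio_emb), codebook(image_emb))
    print(loss.item())
```

Because every modality is expressed as a mixture over the same concept vectors, retrieval reduces to comparing mixture representations in a common space, which is what enables the zero-shot cross-modal matching the abstract reports.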

Subash Khanal, Srikumar Sastry, Aayush Dhakal, Adeel Ahmad, Nathan Jacobs

Remote Sensing Technology

Subash Khanal, Srikumar Sastry, Aayush Dhakal, Adeel Ahmad, Nathan Jacobs. Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping [EB/OL]. (2025-05-19) [2025-06-12]. https://arxiv.org/abs/2505.13777.
