Hyperspectral Image Land Cover Captioning Dataset for Vision Language Models
We introduce HyperCap, the first large-scale hyperspectral captioning dataset designed to enhance model performance in remote sensing applications. Unlike traditional hyperspectral imaging (HSI) datasets that focus solely on classification tasks, HyperCap integrates spectral data with pixel-wise textual annotations, enabling deeper semantic understanding of hyperspectral imagery and providing a valuable resource for tasks such as classification and feature extraction. HyperCap is constructed from four benchmark datasets and annotated through a hybrid approach combining automated and manual methods to ensure accuracy and consistency. Empirical evaluations using state-of-the-art encoders and diverse fusion techniques demonstrate significant improvements in classification performance. These results underscore the potential of vision-language learning in HSI and position HyperCap as a foundational dataset for future research in the field.
Aryan Das, Tanishq Rachamalla, Pravendra Singh, Koushik Biswas, Vinay Kumar Verma, Swalpa Kumar Roy
Remote Sensing Technology
Aryan Das, Tanishq Rachamalla, Pravendra Singh, Koushik Biswas, Vinay Kumar Verma, Swalpa Kumar Roy. Hyperspectral Image Land Cover Captioning Dataset for Vision Language Models [EB/OL]. (2025-05-17) [2025-07-16]. https://arxiv.org/abs/2505.12217.