A Novel Lightweight Transformer with Edge-Aware Fusion for Remote Sensing Image Captioning
Transformer-based models have achieved strong performance in remote sensing image captioning by capturing long-range dependencies and contextual information. However, their practical deployment is hindered by high computational costs, especially in multi-modal frameworks that employ separate transformer-based encoders and decoders. In addition, existing remote sensing image captioning models focus primarily on high-level semantic extraction and often overlook fine-grained structural features such as edges, contours, and object boundaries. To address these challenges, a lightweight transformer architecture is proposed that reduces the dimensionality of the encoder layers and employs a distilled version of GPT-2 as the decoder. A knowledge distillation strategy transfers knowledge from a more complex teacher model to improve the performance of the lightweight network. Furthermore, an edge-aware enhancement strategy is incorporated to strengthen image representation and object boundary understanding, enabling the model to capture fine-grained spatial details in remote sensing images. Experimental results demonstrate that the proposed approach significantly improves caption quality compared to state-of-the-art methods.
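The abstract names three technical ingredients: a reduced-dimension encoder, a distilled GPT-2 decoder trained with knowledge distillation, and an edge-aware enhancement of the visual representation. Since the page gives no implementation details, the following is a minimal PyTorch sketch of the latter two ideas under stated assumptions: edges are approximated here with fixed Sobel filters and fused through a gated residual (both assumptions, not necessarily the paper's mechanism), and the distillation objective is the standard Hinton-style mix of cross-entropy and temperature-scaled KL divergence. All names (EdgeAwareFusion, distillation_loss, feat_dim, T, alpha) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EdgeAwareFusion(nn.Module):
        """Hypothetical fusion block: extracts Sobel edge maps from the
        input image and fuses their features into the encoder's visual
        features via a learned gate (an assumption, for illustration)."""
        def __init__(self, feat_dim=256):
            super().__init__()
            # Fixed Sobel kernels, applied to a grayscale version of the image.
            sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
            self.register_buffer("kx", sobel_x.view(1, 1, 3, 3))
            self.register_buffer("ky", sobel_x.t().view(1, 1, 3, 3))
            self.edge_proj = nn.Conv2d(1, feat_dim, kernel_size=3, padding=1)
            self.gate = nn.Sequential(
                nn.Conv2d(2 * feat_dim, feat_dim, kernel_size=1), nn.Sigmoid()
            )

        def forward(self, image, feats):
            # image: (B, 3, H, W); feats: (B, C, h, w) from the visual encoder.
            gray = image.mean(dim=1, keepdim=True)
            gx = F.conv2d(gray, self.kx, padding=1)
            gy = F.conv2d(gray, self.ky, padding=1)
            edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # edge magnitude map
            edge_feats = self.edge_proj(edges)
            # Match the spatial resolution of the encoder features.
            edge_feats = F.adaptive_avg_pool2d(edge_feats, feats.shape[-2:])
            g = self.gate(torch.cat([feats, edge_feats], dim=1))
            return feats + g * edge_feats  # gated residual fusion

    def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
        """Standard knowledge-distillation objective: cross-entropy on the
        ground-truth caption tokens plus temperature-scaled KL divergence
        toward the teacher's token distribution."""
        ce = F.cross_entropy(student_logits, targets)
        kl = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * ce + (1 - alpha) * kl

The gated residual keeps the fusion cheap (one 1x1 convolution and a sigmoid), consistent with the lightweight design the abstract emphasizes, and the T*T factor in the KL term is the usual correction that keeps its gradient magnitude comparable to the cross-entropy term as the temperature grows.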
Swadhin Das, Divyansh Mundra, Priyanshu Dayal, Raksha Sharma
Subjects: Computing Technology, Computer Technology; Remote Sensing Technology
Swadhin Das, Divyansh Mundra, Priyanshu Dayal, Raksha Sharma. A Novel Lightweight Transformer with Edge-Aware Fusion for Remote Sensing Image Captioning [EB/OL]. (2025-06-11) [2025-06-24]. https://arxiv.org/abs/2506.09429.