Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification
Hyperspectral image (HSI) classification remains a challenging task due to the intricate spatial-spectral correlations. Existing transformer models excel in capturing long-range dependencies but often suffer from information redundancy and attention inefficiencies, limiting their ability to model fine-grained relationships crucial for HSI classification. To overcome these limitations, this work proposes MemFormer, a lightweight and memory-enhanced transformer. MemFormer introduces a memory-enhanced multi-head attention mechanism that iteratively refines a dynamic memory module, enhancing feature extraction while reducing redundancy across layers. Additionally, a dynamic memory enrichment strategy progressively captures complex spatial and spectral dependencies, leading to more expressive feature representations. To further improve structural consistency, we incorporate a spatial-spectral positional encoding (SSPE) tailored for HSI data, ensuring continuity without the computational burden of convolution-based approaches. Extensive experiments on benchmark datasets demonstrate that MemFormer achieves superior classification accuracy, outperforming state-of-the-art methods.
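The memory-enhanced attention described above can be illustrated with a minimal sketch: input tokens attend over both themselves and a small set of memory tokens, and the memory is then refined from the attended features. This is an assumption-laden toy in NumPy, not the paper's implementation; the single-head form, the mean-pooled memory update, and the mixing rate `alpha` are all hypothetical simplifications for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(X, M, Wq, Wk, Wv, alpha=0.5):
    """One memory-augmented attention step (illustrative sketch).

    X: (n, d) patch tokens; M: (m, d) memory tokens.
    Keys and values are drawn from both the input tokens and the
    memory, so each query can also attend to accumulated context.
    """
    d = X.shape[1]
    KV = np.concatenate([X, M], axis=0)       # extend keys/values with memory
    Q, K, V = X @ Wq, KV @ Wk, KV @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))         # (n, n + m) attention weights
    out = A @ V                               # refined token features
    # Iterative memory refinement: blend the old memory with a pooled
    # summary of the new features (alpha is a hypothetical mixing rate).
    M_new = alpha * M + (1 - alpha) * out.mean(axis=0, keepdims=True)
    return out, M_new
```

Stacking such layers, with `M_new` from one layer feeding the next, mirrors the progressive memory-enrichment idea: later layers see a compact summary of earlier features rather than re-attending to everything, which is how redundancy across layers can be reduced.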
Salvatore Distefano, Adil Mehmood Khan, Muhammad Ahmad, Manuel Mazzara
Subjects: Computing Technology; Computer Technology
Salvatore Distefano, Adil Mehmood Khan, Muhammad Ahmad, Manuel Mazzara. Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification [EB/OL]. (2025-04-17) [2025-07-16]. https://arxiv.org/abs/2504.13242