Infrared and Visible Image Fusion Based on Implicit Neural Representations
Infrared and visible image fusion aims to combine the complementary strengths of the two modalities to generate images that are rich in information and meet visual or computational requirements. This paper proposes an image fusion method based on Implicit Neural Representations (INR), termed INRFuse. The method parameterizes a continuous function with a neural network to implicitly represent the multimodal information of the images, moving beyond the traditional reliance on discrete pixels or explicit features. The normalized spatial coordinates of the infrared and visible images serve as inputs, and a multilayer perceptron adaptively fuses the features of the two modalities to output the fused image. Multiple loss functions are designed to jointly optimize the similarity between the fused image and the source images, effectively preserving the thermal radiation information of the infrared image while maintaining the texture details of the visible image. Furthermore, the resolution-independent nature of INR allows images of different resolutions to be fused directly and enables super-resolution reconstruction through high-density coordinate queries. Experimental results show that INRFuse outperforms existing methods in both subjective visual quality and objective evaluation metrics, producing fused images with clear structures, natural details, and rich information, without requiring a training dataset.
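The abstract outlines the core mechanism: a coordinate-based MLP is fitted per image pair so that its output stays similar to both source images, and the continuous representation can then be queried on a denser coordinate grid for super-resolution. The sketch below illustrates this pipeline in PyTorch under stated assumptions; the network shape, the simple per-pixel L1 similarity losses, and the names `INRFuseNet`, `make_grid`, and `fit_fuse` are hypothetical, since the abstract does not specify the exact architecture or loss terms.

```python
# Minimal sketch of INR-based infrared/visible fusion (assumed PyTorch setup).
# All names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn as nn

class INRFuseNet(nn.Module):
    """MLP mapping normalized (x, y) coordinates to a fused intensity."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):               # coords: (N, 2) in [-1, 1]
        return torch.sigmoid(self.net(coords))  # (N, 1) fused intensity

def make_grid(h, w):
    """Normalized coordinates for an h x w pixel grid."""
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)

def fit_fuse(ir, vis, steps=2000, lr=1e-4):
    """Optimize the INR so its output stays similar to both source images.

    ir, vis: (H, W) tensors in [0, 1]. A single shared coordinate grid and
    plain L1 similarity terms are used here for brevity; the paper combines
    several loss terms whose exact forms the abstract does not give.
    """
    h, w = ir.shape
    coords = make_grid(h, w)
    net = INRFuseNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    ir_t, vis_t = ir.reshape(-1, 1), vis.reshape(-1, 1)
    for _ in range(steps):
        pred = net(coords)
        loss = (pred - ir_t).abs().mean() + (pred - vis_t).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Resolution-free readout: query a denser grid for super-resolution.
    with torch.no_grad():
        sr = net(make_grid(2 * h, 2 * w)).reshape(2 * h, 2 * w)
    return net, sr
```

Because the network is optimized directly on the given image pair, no external training dataset is involved, consistent with the abstract's claim; the final dense query of `make_grid(2 * h, 2 * w)` demonstrates the resolution-independent super-resolution readout.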
Shuchen Sun, Ligen Shi, Chang Liu, Lina Wu, Jun Qiu
Subject: Computing Technology; Computer Technology
Shuchen Sun, Ligen Shi, Chang Liu, Lina Wu, Jun Qiu. Infrared and Visible Image Fusion Based on Implicit Neural Representations [EB/OL]. (2025-06-20) [2025-07-25]. https://arxiv.org/abs/2506.16773.