Adapting Large VLMs with Iterative and Manual Instructions for Generative Low-light Enhancement
Most existing low-light image enhancement (LLIE) methods rely on pre-trained model priors, low-light inputs, or both, while neglecting the semantic guidance available from normal-light images. This limitation hinders their effectiveness in complex lighting conditions. In this paper, we propose VLM-IMI, a novel framework that leverages large vision-language models (VLMs) with iterative and manual instructions (IMIs) for LLIE. VLM-IMI incorporates textual descriptions of the desired normal-light content as enhancement cues, enabling semantically informed restoration. To effectively integrate cross-modal priors, we introduce an instruction prior fusion module, which dynamically aligns and fuses image and text features, promoting the generation of detailed and semantically coherent outputs. During inference, we adopt an iterative and manual instruction strategy to refine textual instructions, progressively improving visual quality. This refinement enhances structural fidelity, semantic alignment, and the recovery of fine details under extremely low-light conditions. Extensive experiments across diverse scenarios demonstrate that VLM-IMI outperforms state-of-the-art methods in both quantitative metrics and perceptual quality. The source code is available at https://github.com/sunxiaoran01/VLM-IMI.
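The abstract describes an instruction prior fusion module that dynamically aligns and fuses image and text features. Below is a minimal sketch of one plausible realization, assuming a cross-attention design in PyTorch; the class name `InstructionPriorFusion`, the feature dimensions, and the residual layout are hypothetical illustrations, not taken from the paper or its released code.

```python
# Hypothetical sketch of a cross-modal instruction prior fusion module.
# Assumes image features arrive as flattened spatial tokens and text
# features come from a frozen VLM/text encoder (e.g. CLIP-style, dim 768).
import torch
import torch.nn as nn


class InstructionPriorFusion(nn.Module):
    """Fuses image tokens with text-instruction tokens via cross-attention."""

    def __init__(self, img_dim: int = 320, txt_dim: int = 768, heads: int = 8):
        super().__init__()
        # Project text-encoder outputs into the image feature space.
        self.txt_proj = nn.Linear(txt_dim, img_dim)
        self.attn = nn.MultiheadAttention(img_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(img_dim)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, N, img_dim) flattened spatial tokens
        # txt_feat: (B, T, txt_dim) token embeddings of the instruction
        txt = self.txt_proj(txt_feat)
        # Image tokens query the instruction tokens, aligning the two modalities.
        fused, _ = self.attn(query=img_feat, key=txt, value=txt)
        # Residual connection preserves the original image content.
        return self.norm(img_feat + fused)


# Usage with dummy tensors (shapes are illustrative assumptions):
fusion = InstructionPriorFusion()
img = torch.randn(1, 64 * 64, 320)   # flattened 64x64 feature map
txt = torch.randn(1, 77, 768)        # CLIP-length instruction embedding
out = fusion(img, txt)               # -> (1, 4096, 320)
```

The iterative and manual instruction strategy at inference would then repeatedly re-run the enhancer while a user edits the textual instruction between passes; the exact loop is not specified in the abstract.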
Xiaoran Sun, Liyan Wang, Cong Wang, Yeying Jin, Kin-man Lam, Zhixun Su, Yang Yang, Jinshan Pan
Computing Technology; Computer Technology
Xiaoran Sun, Liyan Wang, Cong Wang, Yeying Jin, Kin-man Lam, Zhixun Su, Yang Yang, Jinshan Pan. Adapting Large VLMs with Iterative and Manual Instructions for Generative Low-light Enhancement [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18064.