Towards Scale-Aware Low-Light Enhancement via Structure-Guided Transformer Design

Source: arXiv
Abstract

Current Low-light Image Enhancement (LLIE) techniques predominantly rely either on direct Low-Light (LL) to Normal-Light (NL) mappings or on guidance from semantic features or illumination maps. Nonetheless, the intrinsic ill-posedness of LLIE and the difficulty of retrieving robust semantics from heavily corrupted images hinder their effectiveness in extremely low-light environments. To tackle this challenge, we present SG-LLIE, a new multi-scale CNN-Transformer hybrid framework guided by structure priors. Rather than employing pre-trained models to extract semantics or illumination maps, we extract robust structure priors with illumination-invariant edge detectors. Moreover, we develop a CNN-Transformer Hybrid Structure-Guided Feature Extractor (HSGFE) module at each scale within the UNet encoder-decoder architecture. In addition to CNN blocks, which excel at multi-scale feature extraction and fusion, we introduce a Structure-Guided Transformer Block (SGTB) in each HSGFE that incorporates structural priors to modulate the enhancement process. Extensive experiments show that our method achieves state-of-the-art performance on several LLIE benchmarks in both quantitative metrics and visual quality. Our solution ranks second in the NTIRE 2025 Low-Light Enhancement Challenge. Code is released at https://github.com/minyan8/imagine.
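
The abstract's core architectural idea is that an illumination-invariant edge map acts as a structure prior that modulates transformer features inside each HSGFE of the UNet. The PyTorch sketch below shows one plausible form of such a Structure-Guided Transformer Block; the module name, dimensions, and the scale-and-shift modulation scheme are illustrative assumptions rather than the authors' implementation (see https://github.com/minyan8/imagine for the released code).

```python
# Minimal sketch of a structure-guided transformer block, assuming a
# scale-and-shift modulation of the tokens by an edge-based prior.
# Everything here is an illustrative assumption, not the official SGTB.
import torch
import torch.nn as nn


class StructureGuidedTransformerBlock(nn.Module):
    """Hypothetical SGTB: self-attention over image tokens at one UNet
    scale, modulated by a per-token structure-prior response."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # Project the single-channel structure prior to per-token scale/shift.
        self.prior_proj = nn.Linear(1, 2 * dim)

    def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # x:     (B, N, dim)  image tokens at the current scale
        # prior: (B, N, 1)    illumination-invariant edge response per token
        scale, shift = self.prior_proj(prior).chunk(2, dim=-1)
        h = self.norm1(x) * (1 + scale) + shift        # structure-guided modulation
        h, _ = self.attn(h, h, h, need_weights=False)  # global self-attention
        x = x + h
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 64 * 64, 48)    # 64x64 feature map, 48 channels
    edge_prior = torch.rand(2, 64 * 64, 1)  # flattened edge map in [0, 1]
    block = StructureGuidedTransformerBlock(dim=48)
    print(block(tokens, edge_prior).shape)  # torch.Size([2, 4096, 48])
```

In this sketch the modulation is applied before attention, so strong edge responses re-weight exactly the tokens where structural detail must be preserved, which matches the abstract's description of priors modulating the enhancement process.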

Wei Dong, Yan Min, Han Zhou, Jun Chen

Subject: Computing Technology, Computer Technology

Wei Dong, Yan Min, Han Zhou, Jun Chen. Towards Scale-Aware Low-Light Enhancement via Structure-Guided Transformer Design [EB/OL]. (2025-04-18) [2025-04-28]. https://arxiv.org/abs/2504.14075.